
I want to save all the data from these two classes (Product_Items and Variant_Product) to a JSON output file. In getProductDetails(), I extract the data for the first element of the product_variants list and add it to my dict (item_list); for the remaining elements, I create a request that recursively calls the same function until I have all the keys in my dict (item_list). At the end of the function I want to write the extracted data to a JSON file, but I can't return two values from the function.

Similarly, in the getListingDetails() function I need to save the items as a JSON file. Please help!!!

Here is the snippet:

import scrapy
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.item import Item, Field
import re,json

class Product_Items(Item):
    Image_URL = Field()
    Product_Title = Field()
    Price = Field()
    PPU_Price = Field()
    Product_URL = Field()
    Product_SKU = Field()
    Product_UPC = Field()
    
class Variant_Product(Item):
    Image_URL = Field()
    Product_Title = Field()
    Price = Field()
    PPU_Price = Field()
    Product_URL = Field()
    Product_SKU = Field()
    Product_UPC = Field()
    Product_Size = Field()
    Meta = Field()
    
class walmartSpider(scrapy.Spider):
    name = "walmart"
    start_urls = ['https://www.walmart.com/all-departments']
    item_list = {}
    
    def parse(self,response):
        reqs = []
        base_url='https://www.walmart.com/'
        hxs = Selector(text=response.text)
        json_response = hxs.xpath('//script[@id="home"]//text()').get()
        data = json.loads(json_response)
        cat_urls = self.getCategoryUrls(data)
        
        for url in cat_urls:
            if url[:7] == '/browse':
                url = base_url + url
            link=Request(url=url,callback=self.getListingDetails)
            reqs.append(link)
        return reqs
        
    def getCategoryUrls(self,data):
        .....
        return final_cat_url
        
    def getListingDetails(self,response):
        reqs = []
        base_url = 'https://www.walmart.com/'
        hxs = Selector(text=response.text)
        data = json.loads(hxs.xpath('//script[@id="searchContent"]//text()').get())
        products = data['searchContent']['preso']['items']
        item = Product_Items()
        for product in products:
            item['Image_URL'] = product['imageUrl']
            item['Product_Title'] = product['title']
            item['Product_URL'] = base_url + product['productPageUrl']
            item['Product_SKU'] = product['productId']
            item['Product_UPC'] = product['standardUpc'][0]
            try:
                item['PPU_Price'] = product['primaryOffer']['unitPriceDisplayCondition']
            except:
                item['PPU_Price'] = ''
            try:
                regular_price = product['primaryOffer']['offerPrice']
            except:
                regular_price = ''
                
            if regular_price:
                item['Price'] = product['primaryOffer']['offerPrice']
            else:
                product_req = Request(url=item['Product_URL'],callback=self.getProductDetails)
                reqs.append(product_req)
                
            # Want to save this item as JSON file

        # Pagination
        try:
            next_page = data['searchContent']['preso']['pagination']['next']['url']
        except:
            next_page = ''

        if next_page:
            next_page_url = str(re.findall(r'^[\S]+\?',response.url)[0])+str(next_page)
            req = Request(url=next_page_url,callback=self.getListingDetails)
            reqs.append(req)
        return reqs


    def getProductDetails(self,response):
        reqs = []
        base_url = 'https://www.walmart.com/ip/'
        hxs = Selector(text=response.text)
        variant = Variant_Product()
        prod_data = json.loads(hxs.xpath('//script[@id="item"]//text()').get())
        product_variants = prod_data['item']['product']['buyBox']['products']
        for product_variant in product_variants[1:]:
            item_id = product_variant['usItemId']
            if item_id not in self.item_list.keys():
                self.item_list[item_id] = ''
                req = Request(url=base_url+str(item_id),callback=self.getProductDetails)
                reqs.append(req)
        
        product_0 = prod_data['item']['product']['buyBox']['products'][0]
        variant['Product_Title'] = product_0['productName']
        variant['Product_SKU'] = product_0['walmartItemNumber']
        variant['Product_UPC'] = product_0['upc']
        variant['Product_Size'] = product_0['variants'][0]['value']
        variant['Product_URL'] = product_0['canonicalUrl']
        variant['Price'] = product_0['priceMap']['price']
        variant['PPU_Price'] = product_0['priceMap']['unitPriceDisplayValue']
        variant['Meta'] = (product_0['categoryPath']).replace('Home Page/','')
        
        # Want to save this item as JSON file
        return reqs

1 Answer


According to the scrapy docs, there are several built-in "exporters" that can serialize your data into a number of different formats (including JSON).

You should be able to do something like the following:

# ...
from scrapy.exporters import JsonItemExporter

# ...
    def getListingDetails(self, response):
        # ...
        for product in products:
            item = Product_Items(
                        Image_URL = product['imageUrl'],
                        Product_Title = product['title'],
                        Product_URL = base_url + product['productPageUrl'],
                        Product_SKU = product['productId'],
                        Product_UPC = product['standardUpc'][0],
                        PPU_Price = product.get('primaryOffer', {}).get('unitPriceDisplayCondition', ''),
                        Price = product.get('primaryOffer', {}).get('offerPrice', '')
                    )
            if not item['Price']:
                product_req = Request(url=item['Product_URL'],callback=self.getProductDetails)
                reqs.append(product_req)

            JsonItemExporter(open(f"{item['Product_SKU']}.json", "wb")).export_item(item)
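If you would rather not create one JSON file per SKU, a plain-stdlib alternative (a sketch only, not using Scrapy's exporters; the file name `products.json` and the sample items are hypothetical) is to collect the item dicts in a list and dump them as a single JSON array:

```python
import json

# Hypothetical collected items; in the spider these would be the
# Product_Items dicts built inside getListingDetails().
collected_items = [
    {"Product_SKU": "123", "Product_Title": "Example A", "Price": "1.99"},
    {"Product_SKU": "456", "Product_Title": "Example B", "Price": "2.49"},
]

# Write all items into a single JSON array instead of one file per item.
with open("products.json", "w", encoding="utf-8") as f:
    json.dump(collected_items, f, ensure_ascii=False, indent=2)

# Reading it back yields the same list of dicts.
with open("products.json", encoding="utf-8") as f:
    assert json.load(f) == collected_items
```

Note that `json.dump` takes a text-mode file, unlike `JsonItemExporter`, which writes bytes.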

A few notes:

  • The JsonItemExporter.__init__ method expects a file-like object whose write method accepts bytes, which is why the file is opened with "wb"
  • dict.get() in Python lets you pass a default value as the second argument in case the key doesn't exist (not strictly necessary here, but it cuts down on the try/except logic)
  • When handling exceptions, the PEP 8 style guide recommends catching more specific exception types (in the cases above, except KeyError: would probably be appropriate) rather than using a bare except clause
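To make the chained dict.get() pattern concrete, here is a minimal sketch using a hypothetical `product` dict: missing keys at either level fall through to the `''` default instead of raising KeyError.

```python
product = {"primaryOffer": {"offerPrice": "4.99"}}

# Key present: the chain returns the nested value.
price = product.get("primaryOffer", {}).get("offerPrice", "")
assert price == "4.99"

# Inner key missing: the second default ('') is returned.
ppu = product.get("primaryOffer", {}).get("unitPriceDisplayCondition", "")
assert ppu == ""

# Outer key missing: the empty-dict default keeps the chain from raising.
price = {}.get("primaryOffer", {}).get("offerPrice", "")
assert price == ""
```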

Let me know if the above works for you!

answered May 27, 2021 at 0:30