
So I want to scrape data from this website, specifically from the company details section:

Website to scrape

I got some help from someone to get this working with Python Playwright, but I need to do it with Python scrapy-selenium.

I want to rewrite the code from the answer here in the scrapy-selenium way.

Original question

I tried doing it the way suggested in this question:

scrapy selenium

but no luck =/

My code:

resources/search_results_searchpage.yml:

products:
    css: 'div[data-content="productItem"]'
    multiple: true
    type: Text
    children:
        link:
            css: a.elements-title-normal 
            type: Link
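
For reference, the YAML above drives selectorlib roughly like this (a standalone sketch; the sample HTML is made up to mirror the listing markup, and the printed result is approximate):

from selectorlib import Extractor

# Same selector config as the file above, inlined for a standalone demo.
extractor = Extractor.from_yaml_string("""
products:
    css: 'div[data-content="productItem"]'
    multiple: true
    type: Text
    children:
        link:
            css: a.elements-title-normal
            type: Link
""")

html = '''
<div data-content="productItem">
  <a class="elements-title-normal" href="/product/123.html">Demo headphones</a>
</div>
'''

# Link values are resolved against base_url, so each product dict carries
# an absolute product-page URL.
print(extractor.extract(html, base_url="https://www.alibaba.com"))
# roughly: {'products': [{'link': 'https://www.alibaba.com/product/123.html'}]}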

crawler.py:

import scrapy
import csv
from scrapy_selenium import SeleniumRequest
import os
from selectorlib import Extractor
from scrapy import Selector

class Spider(scrapy.Spider):
    name = 'alibaba_crawler'
    allowed_domains = ['alibaba.com']
    start_urls = ['http://alibaba.com/']
    link_extractor = Extractor.from_yaml_file(os.path.join(os.path.dirname(__file__), "../resources/search_results_searchpage.yml"))

    def start_requests(self):
        search_text="Headphones"
        url="https://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&CatId=&SearchText={0}&viewtype=G".format(search_text)

        yield SeleniumRequest(url=url, callback = self.parse, meta = {"search_text": search_text})


    def parse(self, response):
        data = self.link_extractor.extract(response.text, base_url=response.url)
        for product in data['products']:
            parsed_url=product["link"]

            yield SeleniumRequest(url=parsed_url, callback=self.crawl_mainpage)
    
    def crawl_mainpage(self, response):
        driver = response.request.meta['driver']
        button = driver.find_element_by_xpath( "//span[@title='Company Profile']")
        button.click()
        driver.quit()

        yield {
            'name': response.xpath("//h1[@class='module-pdp-title']/text()").extract(),
            'Year of Establishment': response.xpath("//td[contains(text(), 'Year Established')]/following-sibling::td/div/div/div/text()").extract()
         }
        

Running the code:

scrapy crawl alibaba_crawler -o out.csv -t csv

The company name is returned correctly, but Year of Establishment comes back empty when it should return the year.


2 Answers


I was not using the selector correctly: the response object still holds the HTML from before the click, so the post-click DOM has to be re-read from driver.page_source. This works now:

def crawl_mainpage(self, response):
    driver = response.request.meta['driver']
    driver.find_element_by_xpath("//span[@title='Company Profile']").click()
    # Build a fresh Selector from the post-click DOM. Note: no driver.quit()
    # here -- scrapy-selenium's middleware owns the driver and quits it when
    # the spider closes; quitting it mid-crawl breaks later requests.
    sel = Selector(text=driver.page_source)

    yield {
        'Year of Establishment': sel.xpath("//td[contains(text(), 'Year Established')]/following-sibling::td/div/div/div/text()").extract()
    }
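
If the profile tab renders its content asynchronously, reading driver.page_source immediately after the click can still race the page. A minimal variant with an explicit wait (same XPaths as above; the 10-second timeout is an arbitrary choice):

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from scrapy import Selector

def crawl_mainpage(self, response):
    driver = response.request.meta['driver']
    driver.find_element_by_xpath("//span[@title='Company Profile']").click()
    # Block until the profile table is actually in the DOM before reading it.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located(
            (By.XPATH, "//td[contains(text(), 'Year Established')]")))
    sel = Selector(text=driver.page_source)
    yield {
        'Year of Establishment': sel.xpath(
            "//td[contains(text(), 'Year Established')]"
            "/following-sibling::td/div/div/div/text()").extract_first()
    }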
answered 2021-10-26T08:38:30.083

See below for an implementation using the scrapy-selenium library. Note that Selenium is very slow for web scraping; alternatives such as scrapy-splash or scrapy-playwright are recommended. Scraping just 2 pages took over 22 seconds here, versus under 5 seconds with scrapy-playwright.
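
For comparison, a minimal scrapy-playwright setup looks roughly like this (a sketch based on the scrapy-playwright README; the spider below is illustrative, not the benchmarked code):

import scrapy

class PlaywrightSpider(scrapy.Spider):
    name = 'alibaba_playwright'
    custom_settings = {
        # Route requests through Playwright's browser-based download handler.
        'DOWNLOAD_HANDLERS': {
            'http': 'scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler',
            'https': 'scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler',
        },
        # scrapy-playwright requires the asyncio reactor.
        'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
    }

    def start_requests(self):
        yield scrapy.Request(
            'https://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&CatId=&SearchText=Headphones&viewtype=G',
            meta={'playwright': True},  # render this request in a browser
        )

    def parse(self, response):
        yield {'title': response.xpath('//title/text()').get()}

The scrapy-selenium implementation: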

import scrapy
from scrapy.crawler import CrawlerProcess
import os
from selectorlib import Extractor
from scrapy_selenium import SeleniumRequest
from shutil import which


class Spider(scrapy.Spider):
    name = 'alibaba_crawler'
    allowed_domains = ['alibaba.com']
    start_urls = ['http://alibaba.com/']
    link_extractor = Extractor.from_yaml_file(os.path.join(
        os.path.dirname(__file__), "../resources/search_results_searchpage.yml"))

    def start_requests(self):
        search_text = "Headphones"
        url = "https://www.alibaba.com/trade/search?fsb=y&IndexArea=product_en&CatId=&SearchText={0}&viewtype=G".format(
            search_text)
        yield scrapy.Request(url, callback=self.parse, meta={"search_text": search_text})

    def parse(self, response):
        data = self.link_extractor.extract(
            response.text, base_url=response.url)
        for product in data['products']:
            parsed_url = product["link"]

            # Click the Company Profile tab via JavaScript before the
            # middleware captures the rendered HTML.
            yield SeleniumRequest(
                url=parsed_url,
                callback=self.crawl_mainpage,
                script='document.querySelector("span[title=\'Company Profile\']").click();')

    def crawl_mainpage(self, response):
        yield {
            'name': response.xpath("//h1[@class='module-pdp-title']/text()").extract_first(),
            'Year of Establishment': response.xpath("//td[contains(text(), 'Year Established')]/following-sibling::td/div/div/div/text()").extract_first()
        }

if __name__ == "__main__":
    process = CrawlerProcess(settings={'DOWNLOADER_MIDDLEWARES': {
        'scrapy_selenium.SeleniumMiddleware': 800
    },
        'SELENIUM_DRIVER_NAME': 'chrome',
        'SELENIUM_DRIVER_EXECUTABLE_PATH': which('chromedriver'),
        'SELENIUM_DRIVER_ARGUMENTS': ['--headless']
    })
    process.crawl(Spider)
    process.start()
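
scrapy-selenium also exposes wait_time and wait_until arguments on SeleniumRequest, which can hold the request until the Company Profile tab actually exists before the script click fires. A sketch of a drop-in parse using them (same selectors as above; the 10-second timeout is an arbitrary choice):

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

def parse(self, response):
    data = self.link_extractor.extract(response.text, base_url=response.url)
    for product in data['products']:
        yield SeleniumRequest(
            url=product["link"],
            callback=self.crawl_mainpage,
            wait_time=10,  # upper bound, in seconds, for the condition below
            # Wait until the tab is clickable, then click it via JavaScript
            # before the middleware captures the page HTML.
            wait_until=EC.element_to_be_clickable(
                (By.XPATH, "//span[@title='Company Profile']")),
            script='document.querySelector("span[title=\'Company Profile\']").click();')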

Note that I changed your extract() method calls to extract_first() so that a string is returned instead of a list.
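
The difference in one line (the sample markup is made up):

from scrapy import Selector

sel = Selector(text="<h1 class='module-pdp-title'>Demo Audio Co.</h1>")
sel.xpath("//h1/text()").extract()        # ['Demo Audio Co.'] -- always a list
sel.xpath("//h1/text()").extract_first()  # 'Demo Audio Co.'   -- first match, or None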

answered 2021-10-26T17:51:32.933