
I have the following spider, which is supposed to POST to a form. I can't seem to get it to work: when I run it through Scrapy, the expected response never shows up. Can anyone tell me where I'm going wrong?

Here is my spider code:

# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import scrapy
from scrapy.http import FormRequest
from scrapy.shell import inspect_response


class RajasthanSpider(scrapy.Spider):
    name = "rajasthan"
    allowed_domains = ["rajtax.gov.in"]
    start_urls = (
        'http://www.rajtax.gov.in/',
    )

    def parse(self, response):
        return FormRequest.from_response(
            response,
            formname='rightMenuForm',
            formdata={'dispatch': 'dealerSearch'},
            callback=self.dealer_search_page)

    def dealer_search_page(self, response):

        yield FormRequest.from_response(
            response,
            formname='dealerSearchForm',
            formdata={
                "zone": "select",
                "dealertype": "VAT",
                "dealerSearchBy": "dealername",
                "name": "ana"
            }, callback=self.process)

    def process(self, response):
        inspect_response(response, self)

The response I get looks like this: results not found

What I should be getting looks like this: results found

When I replace my dealer_search_page() with a Splash version:

def dealer_search_page(self, response):

    yield FormRequest.from_response(
        response,
        formname='dealerSearchForm',
        formdata={
            "zone": "select",
            "dealertype": "VAT",
            "dealerSearchBy": "dealername",
            "name": "ana"
        },
        callback=self.process,
        meta={
            'splash': {
                'endpoint': 'render.html',
                'args': {'wait': 0.5}
            }
        })

I get the following warning:

2016-03-14 15:01:29 [scrapy] WARNING: Currently only GET requests are supported by SplashMiddleware; <POST http://rajtax.gov.in:80/vatweb/dealerSearch.do> will be handled without Splash

inspect_response() is never called and the program exits before reaching my process() function.

The warning says that Splash does not yet support POST. Will Splash work for this use case, or should I use Selenium?


2 Answers


Splash now supports POST requests. Try SplashFormRequest or {'splash': {'http_method': 'POST'}}.

Based on https://github.com/scrapy-plugins/scrapy-splash
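As a sketch, the meta dict from the question can be extended with Splash's http_method argument; per the Splash HTTP API it belongs under 'args', and the helper name below is hypothetical:

```python
def splash_post_meta(wait=0.5):
    """Build request meta asking scrapy-splash to render a POST via Splash.

    'http_method' is a Splash HTTP API render argument, so it sits under
    'args' (hypothetical helper; details depend on your scrapy-splash version).
    """
    return {
        'splash': {
            'endpoint': 'render.html',
            'args': {'http_method': 'POST', 'wait': wait},
        }
    }

meta = splash_post_meta()
print(meta['splash']['args']['http_method'])  # POST
```

With a recent scrapy-splash installed, passing this meta (or using SplashFormRequest directly) replaces the meta dict from the question's dealer_search_page().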

answered 2017-02-24T00:04:43.037

You can use selenium. Here is a complete working example where we submit the form with the same search parameters as in your Scrapy code and print the results to the console:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


driver = webdriver.Firefox()
driver.get("http://www.rajtax.gov.in/")

# accept the alert
driver.switch_to.alert.accept()

# open "Search for Dealers"
wait = WebDriverWait(driver, 10)
search_for_dealers = wait.until(EC.visibility_of_element_located((By.PARTIAL_LINK_TEXT, "Search for Dealers")))
search_for_dealers.click()

# set search parameters
dealer_type = Select(driver.find_element_by_name("dealertype"))
dealer_type.select_by_visible_text("VAT")

search_by = Select(driver.find_element_by_name("dealerSearchBy"))
search_by.select_by_visible_text("Dealer Name")

search_criteria = driver.find_element_by_name("name")
search_criteria.send_keys("ana")

# search
driver.find_element_by_css_selector("table.vattabl input.submit").click()

# wait for and print results
table = wait.until(EC.visibility_of_element_located((By.XPATH, "//table[@class='pagebody']/following-sibling::table")))

for row in table.find_elements_by_css_selector("tr")[1:]:  # skipping header row
    print(row.find_elements_by_tag_name("td")[1].text)

It prints the TIN numbers from the search results table:

08502557052
08451314461
...
08734200736
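The second-column extraction that the Selenium loop performs can also be sketched with the stdlib html.parser, for readers who fetch the results page some other way (the table markup below is a made-up stand-in for the real page):

```python
from html.parser import HTMLParser


class SecondColumnExtractor(HTMLParser):
    """Collect the text of the second <td> in each <tr> (hypothetical markup)."""

    def __init__(self):
        super().__init__()
        self.rows = []        # second-column value of each row, in order
        self._td_index = -1   # index of the <td> we are inside; -1 = none
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self._td_index = -1        # new row: reset column counter
        elif tag == 'td':
            self._td_index += 1
            self._buffer = []

    def handle_data(self, data):
        if self._td_index == 1:        # only record text inside the second <td>
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == 'td' and self._td_index == 1:
            self.rows.append(''.join(self._buffer).strip())


# Stand-in for the dealer search results table (real markup will differ).
html = """
<table class="pagebody-results">
  <tr><td>S.No</td><td>TIN</td></tr>
  <tr><td>1</td><td>08502557052</td></tr>
  <tr><td>2</td><td>08451314461</td></tr>
</table>
"""

parser = SecondColumnExtractor()
parser.feed(html)
print(parser.rows[1:])  # ['08502557052', '08451314461'] -- [1:] skips the header row
```

This mirrors the `[1:]` header-skipping and `td[1]` column selection in the Selenium loop above.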

Note that the browser you automate with selenium can be a headless one, like PhantomJS, or a regular browser running on a virtual display.


Answer to the original question (before the edit):

What I see on the dealer search page: the form and its fields are constructed by a bunch of JavaScript executed in the browser. Scrapy cannot execute JS, so this is the part where you need to help it. I'm pretty sure Scrapy + Splash would be enough for this use case and you would not need to go into browser automation. Here is a working example using Scrapy and Splash:
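A minimal scrapy-splash configuration for such a setup (settings values taken from the scrapy-splash README; a Splash instance is assumed to be running at localhost:8050) would look like:

```python
# settings.py fragment, per the scrapy-splash README
SPLASH_URL = 'http://localhost:8050'  # assumed local Splash instance

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
```

The spider would then issue scrapy_splash SplashRequest objects instead of plain Requests so that pages are rendered (JS executed) before Scrapy parses them.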

answered 2016-03-13T11:51:59.697