
Context

I am trying to write my own money aggregator, because most of the tools available on the market do not yet cover every financial website. I am using Python 2.7.9 on a Raspberry Pi.

Thanks to the requests library, I managed to connect to two of my accounts (one crowdfunding site and my pension). The third website I am trying to aggregate has been giving me trouble for two weeks now; its name is https://www.amundi-ee.com

I found out that the website actually uses JavaScript, and after a lot of research I ended up using dryscrape (I cannot use Selenium, since ARM is no longer supported).

Problem

When running this code:

import dryscrape

url='https://www.amundi-ee.com'
extensionInit='/psf/#login'
extensionConnect='/psf/authenticate'
extensionResult='/psf/#'
urlInit = url + extensionInit
urlConnect = url + extensionConnect
urlResult = url + extensionResult

s = dryscrape.Session()
s.visit(urlInit)
print s.body()
login = s.at_xpath('//*[@id="identifiant"]')
login.set("XXXXXXXX")
pwd = s.at_xpath('//*[@name="password"]')
pwd.set("YYYYYYY")
# Push the button
login.form().submit()
s.visit(urlConnect)
print s.body()
s.visit(urlResult)

The problem occurs when the code visits urlConnect (the s.visit(urlConnect) call): the print s.body() right after it returns the following:

{"code":405,"message":"No route found for \u0022GET \/authenticate\u0022: Method Not Allowed (Allow: POST)","errors":[]}

Question

Why do I get this error message, and how can I log in to the website properly to retrieve the data I am looking for?
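
For illustration, the JSON body of that 405 response can be decoded with the standard library to see exactly what the server is saying (a minimal sketch using only the error text quoted above):

```python
import json

# The exact body returned by print s.body() after visiting urlConnect
body = ('{"code":405,"message":"No route found for \\u0022GET '
        '\\/authenticate\\u0022: Method Not Allowed (Allow: POST)",'
        '"errors":[]}')

err = json.loads(body)
print(err["code"])     # 405
print(err["message"])  # No route found for "GET /authenticate": Method Not Allowed (Allow: POST)
```

In other words, s.visit() issues a GET request, but /authenticate only accepts POST, which is why visiting that endpoint directly can never work.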

PS: my code was inspired by this issue: Python dryscrape scrape page with cookies


1 Answer


OK, after more than a month of struggling with this problem, I am happy to say that I finally got what I wanted.

What was the problem?

Basically two main things (maybe more, but I may have forgotten some):

  1. The password must be entered by pressing buttons whose digits are laid out randomly, so a new mapping is needed on every visit
  2. login.form().submit() was messing up the access to the page with the desired data; clicking the validate button is enough
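
Point 1 can be sketched in plain Python: given the digit labels scraped from the ten keypad buttons, build a digit-to-position map, then derive the 1-based XPath button indices to click for a given code (build_click_order is my own name for this, not something from the site or dryscrape):

```python
def build_click_order(labels, pin):
    """labels[i] is the digit shown on the (i+1)-th keypad button;
    return the 1-based button indices to click, in PIN order."""
    digit_to_pos = {int(text): i + 1 for i, text in enumerate(labels)}
    return [digit_to_pos[int(d)] for d in pin]

# Example: a randomly shuffled keypad as scraped from the page
labels = ["7", "2", "9", "0", "4", "1", "8", "5", "3", "6"]
print(build_click_order(labels, "2135"))  # [2, 6, 9, 8]
```

This is exactly what the mapping loop in the final code below does, except it stores the map in a list instead of a dict.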

Here is the final code. If you spot any bad usage, please don't hesitate to say so, as I am a Python novice and a sporadic coder.

import dryscrape
from bs4 import BeautifulSoup
from lxml import html
from time import sleep
from webkit_server import InvalidResponseError
from decimal import Decimal
import re
import sys 


def getAmundi(seconds=0):

    url = 'https://www.amundi-ee.com/psf'
    extensionInit='/#login'
    urlInit = url + extensionInit
    urlResult = url + '/#'
    timeoutRetry=1

    if 'linux' in sys.platform:
        # start xvfb in case no X is running. Make sure xvfb 
        # is installed, otherwise this won't work!
        dryscrape.start_xvfb()

    print "connecting to " + url + " with " + str(seconds) + "s of loading wait..." 
    s = dryscrape.Session()
    s.visit(urlInit)
    sleep(seconds)
    s.set_attribute('auto_load_images', False)
    s.set_header('User-agent', 'Google Chrome')
    while True:
        try:
            q = s.at_xpath('//*[@id="identifiant"]')
            q.set("XXXXXXXX")
        except Exception as ex:
            seconds+=timeoutRetry
            print "Failed, retrying to get the login field in " + str(seconds) + "s"
            sleep(seconds)
            continue
        break 

    #get password button mapping
    print "logging in ..."
    soup = BeautifulSoup(s.body(), 'html.parser')
    button_number = range(10)
    for x in range(0, 10):
        button_number[int(soup.findAll('button')[x].text.strip())] = x

    #needed button
    button_1 = button_number[1] + 1
    button_2 = button_number[2] + 1
    button_3 = button_number[3] + 1
    button_5 = button_number[5] + 1

    #push buttons for password
    button = s.at_xpath('//*[@id="num-pad"]/button[' + str(button_2) +']')
    button.click()
    button = s.at_xpath('//*[@id="num-pad"]/button[' + str(button_1) +']')
    button.click()
    # ... the remaining password digits are clicked the same way ...

    # Push the validate button
    button = s.at_xpath('//*[@id="content"]/router-view/div/form/div[3]/input')
    button.click()
    print "accessing ..."
    sleep(seconds)

    while True:
        try:
            soup = BeautifulSoup(s.body(), 'html.parser')
            total_lended = soup.findAll('span')[8].text.strip()
            total_lended = Decimal(total_lended.encode('ascii','ignore').replace(',','.').replace(' ',''))
            print total_lended

        except Exception as ex:
            seconds+=1
            print "Failed, retrying to get the data in " + str(seconds) + "s"
            sleep(seconds)
            continue
        break 

    s.reset()
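
The two while True/except loops above follow the same pattern: keep retrying with a growing delay until the page has finished loading. A reusable sketch of that pattern (retry_with_backoff is my own name, not part of the answer):

```python
import time

def retry_with_backoff(action, first_delay=1, max_attempts=10):
    """Call action() until it succeeds; sleep a little longer after
    each failure, like the seconds += 1 loops in the answer."""
    delay = first_delay
    for attempt in range(max_attempts):
        try:
            return action()
        except Exception:
            time.sleep(delay)
            delay += 1
    raise RuntimeError("action failed after %d attempts" % max_attempts)
```

For example, the data-extraction loop could become retry_with_backoff(lambda: parse_total(s.body())), with parse_total holding the BeautifulSoup span lookup.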
Answered 2017-07-02T06:46:44.053