I'm trying to scrape agency names from the second page of a website using the requests module. I can parse the names on the landing page by sending a GET request to the URL.
However, to access the names on the second page and beyond, I need to send a POST request with the appropriate parameters. I tried to mimic the POST request exactly as I see it in the dev tools, but all I get back is this:
<?xml version='1.0' encoding='UTF-8'?>
<partial-response id="j_id1"><redirect url="/ptn/exceptionhandler/sessionExpired.xhtml"></redirect></partial-response>
This is how I've tried:
import requests
from bs4 import BeautifulSoup
from pprint import pprint
link = 'https://www.gebiz.gov.sg/ptn/opportunity/BOListing.xhtml?origin=menu'
url = 'https://www.gebiz.gov.sg/ptn/opportunity/BOListing.xhtml'
with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'
    r = s.get(link)
    soup = BeautifulSoup(r.text, "lxml")
    payload = {
        'contentForm': 'contentForm',
        'contentForm:j_idt171_windowName': '',
        'contentForm:j_idt187_listButton2_HIDDEN-INPUT': '',
        'contentForm:j_idt192_searchBar_INPUT-SEARCH': '',
        'contentForm:j_idt192_searchBarList_HIDDEN-SUBMITTED-VALUE': '',
        'contentForm:j_id135_0': 'Title',
        'contentForm:j_id135_1': 'Document No.',
        'contentForm:j_id136': 'Match All',
        'contentForm:j_idt853_select': 'ON',
        'contentForm:j_idt859_select': '0',
        'javax.faces.ViewState': soup.select_one('input[name="javax.faces.ViewState"]')['value'],
        'javax.faces.source': 'contentForm:j_idt902:j_idt955_2_2',
        'javax.faces.partial.event': 'click',
        'javax.faces.partial.execute': 'contentForm:j_idt902:j_idt955_2_2 contentForm:j_idt902',
        'javax.faces.partial.render': 'contentForm:j_idt902:j_idt955 contentForm dialogForm',
        'javax.faces.behavior.event': 'action',
        'javax.faces.partial.ajax': 'true'
    }
    s.headers['Referer'] = 'https://www.gebiz.gov.sg/ptn/opportunity/BOListing.xhtml?origin=menu'
    s.headers['Faces-Request'] = 'partial/ajax'
    s.headers['Origin'] = 'https://www.gebiz.gov.sg'
    s.headers['Host'] = 'www.gebiz.gov.sg'
    s.headers['Accept-Encoding'] = 'gzip, deflate, br'
    res = s.post(url, data=payload, allow_redirects=False)
    # soup = BeautifulSoup(res.text, "lxml")
    # for item in soup.select(".commandLink_TITLE-BLUE"):
    #     print(item.get_text(strip=True))
    print(res.text)
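My suspicion is that the sessionExpired redirect means the server rejected the postback because the submitted javax.faces.ViewState (or one of the other hidden fields) doesn't match the view it rendered, since JSF pages validate those on every partial request. One thing I also tried was seeding the payload from every hidden input actually present in the fetched page instead of typing them out by hand. A minimal sketch of that idea (the markup and field values below are invented for illustration, not taken from the real site):

```python
from bs4 import BeautifulSoup

# Stand-in for the HTML returned by s.get(link); on the real page the
# hidden inputs and their values are generated per view by JSF.
html = """
<form id="contentForm">
  <input type="hidden" name="javax.faces.ViewState" value="stateless-token-123"/>
  <input type="hidden" name="contentForm:j_idt171_windowName" value=""/>
</form>
"""

def collect_hidden_fields(page_html):
    """Return every hidden input as a dict, ready to seed a JSF POST payload."""
    soup = BeautifulSoup(page_html, "html.parser")
    return {
        tag["name"]: tag.get("value", "")
        for tag in soup.select('input[type="hidden"][name]')
    }

payload = collect_hidden_fields(html)
print(payload["javax.faces.ViewState"])  # the token the server expects back
```

The idea is to start from the server's own hidden fields and only overwrite the javax.faces.* pagination keys on top, but it still gives me the same redirect.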
How can I parse the names from the second page of the website when the URL stays the same?