
I'm trying to get the contents of an HTML table using XPaths. I'm using MechanicalSoup to grab the form and submit it (the data sits behind the form submission). Once I reach the second page, I take its URL and pass it to lxml for parsing, but I get AttributeError: 'list' object has no attribute 'xpath'.

import mechanicalsoup
import requests
from lxml import html
from lxml import etree


#This uses MechanicalSoup to grab the form, submit it, and find the data table
browser = mechanicalsoup.StatefulBrowser()
winnet = "http://winnet.wartburg.edu/coursefinder/"
browser.open(winnet)
Searchform = browser.select_form()
Searchform.choose_submit('ctl00$ContentPlaceHolder1$FormView1$Button_FindNow')
response1 = browser.submit_selected() #This Progresses to Second Form
dataURL = 'https://winnet.wartburg.edu/coursefinder/Results.aspx' #Get URL of Second Form w/ Data


pageContent = requests.get(dataURL)
tree = html.fromstring(pageContent.content)
dataTable = tree.xpath('//*[@id="ctl00_ContentPlaceHolder1_GridView1"]')
print(dataTable)
for row in dataTable.xpath(".//tr")[1:]:
    print([cell.text_content() for cell in row.xpath(".//td")])

#XPath to Table
#//*[@id="ctl00_ContentPlaceHolder1_GridView1"]

I would post the HTML I'm trying to parse, but it's very long, and judging from some of the other sites I've worked with, it's written pretty sloppily.


1 Answer


I'm not sure, but I believe you're after something like this. If not, you can probably modify it to get where you want to be.

import pandas as pd
rows = [] #initialize a collection of rows
for row in dataTable[0].xpath(".//tr")[1:]: #add new rows to the collection
    rows.append([cell.text_content().strip() for cell in row.xpath(".//td")])

df = pd.DataFrame(rows) #load the collection to a dataframe
df
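If you also want column names, you could probably pull them from the header row of the GridView. This is just a sketch and assumes the first row of the table uses th cells (typical for an ASP.NET GridView, but I haven't checked your page):

headers = [cell.text_content().strip() for cell in dataTable[0].xpath(".//tr[1]/th")] #grab the header cells, if any
if len(headers) == len(df.columns): #only rename when the counts line up
    df.columns = headers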

Output (please excuse the formatting):

View Details AC 121 01 Accounting Principles I Pilcher, A MWF 10:45AM-11:50AM 45/40/0 WBC 116 2019-20 WI 1.00

View Details AC 122 01 Accounting Principles II Pilcher, A MWF 12:00PM-1:05PM 45/42/0 WBC 116 2019-20 WI 1.00

and so on.
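As an aside: since pageContent already holds the results HTML, pandas can often parse a plain HTML table in one call. A minimal sketch, assuming the GridView renders as a regular table with that same id (untested against your page):

import pandas as pd
tables = pd.read_html(pageContent.text, attrs={"id": "ctl00_ContentPlaceHolder1_GridView1"}, header=0) #returns a list of DataFrames, one per matching table
df = tables[0]
print(df.head())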

Answered 2020-01-27T01:36:02.640