I am scraping the table at https://csr.gov.in/companyprofile.php?year=FY+2015-16&CIN=L00000CH1990PLC010573, but I am not getting exactly the results I am looking for. I want 11 columns from this link: "Company Name", "Class", "State", "Company Type", "RoC", "Sub Category", "Listing Status". Those are 7 columns; below them there is an expand button, "CSR Details of FY 2017-18", and when you click it you get 4 more columns: "Average Net Profit", "CSR Prescribed Expenditure", "CSR Spent", "Local Area Spent". I want all of these columns in a CSV file. I wrote some code, but it is not working properly. I have attached a picture of the result for reference. Here is my code.
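For reference, the CSV I am after would have one row per company with exactly these 11 columns. A minimal sketch of the desired shape (the placeholder values are hypothetical, only the column names come from the site):

```python
import pandas as pd

# The 11 target columns: 7 from the company profile header plus 4 from
# the expanded "CSR Details" section.
COLUMNS = [
    "Company Name", "Class", "State", "Company Type", "RoC",
    "Sub Category", "Listing Status",
    "Average Net Profit", "CSR Prescribed Expenditure",
    "CSR Spent", "Local Area Spent",
]

# Placeholder row to illustrate the desired layout (values are hypothetical).
row = dict(zip(COLUMNS, ["-"] * len(COLUMNS)))
df = pd.DataFrame([row], columns=COLUMNS)
df.to_csv("csr.csv", index=False)  # index=False keeps the row index out of the CSV
```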
from selenium import webdriver
import pandas as pd
from bs4 import BeautifulSoup
from urllib.request import urlopen
import requests
import csv
driver = webdriver.Chrome()
url_file = "csrdata.txt"
with open(url_file, "r") as url:
    url_pages = url.read()
# split the contents into a list of URLs so we can iterate over them
pages = url_pages.split("\n")  # one URL per line
data = []
# now visit the URLs one by one
for single_page in pages:
    driver.get(single_page)
    r = requests.get(single_page)
    soup = BeautifulSoup(r.content, 'html5lib')
    driver.find_element_by_link_text("CSR Details of FY 2017-18").click()
    table = driver.find_elements_by_xpath("//*[contains(@id,'colfy4')]")
    about = table[0].text
    x = about.split('\n')
    print(x)
    data.append(x)
df = pd.DataFrame(data)
print(df)
# write to csv
df.to_csv('csr.csv')
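One likely cause of the odd output is that `about.split('\n')` returns labels and values interleaved in a single flat list, so `pd.DataFrame(data)` produces unnamed numeric columns with headers mixed into the data. One way to recover named columns is to pair each label with the value that follows it. This is a sketch; that the panel text alternates strictly label, value, label, value is an assumption about the page, not something the source confirms:

```python
import pandas as pd

def pair_labels_values(lines):
    """Pair alternating label/value entries into a dict.

    Assumes the element text alternates: label, value, label, value, ...
    """
    it = iter(lines)
    # zip() pulls from the same iterator twice per tuple, giving (label, value) pairs
    return dict(zip(it, it))

# Hypothetical text as it might come back from element.text.split('\n');
# the numbers are made up for illustration.
x = [
    "Average Net Profit", "1,00,000",
    "CSR Prescribed Expenditure", "2,000",
    "CSR Spent", "1,500",
    "Local Area Spent", "1,500",
]

record = pair_labels_values(x)
df = pd.DataFrame([record])  # labels become column names, not extra rows
```

Building one such dict per company and appending it to `data` would let `pd.DataFrame(data)` emit properly named columns in the final CSV.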