I'm not very experienced at scraping data, so the problem here may be obvious to some.
What I want is to scrape historical daily weather data from wunderground.com, without paying for the API. Maybe that's simply not possible.
My approach was to simply use requests.get and save the whole text to a file (code below).
Instead of getting the tables that can be accessed from a web browser (see image below), the result is a file that contains almost everything except those tables. Something like this:
Summary
No data recorded
Daily Observations
No Data
Recorded
Curiously, if I save the page with Firefox, the result depends on whether I choose "Web Page, HTML only" or "Web Page, complete": the latter includes the data I'm interested in, the former does not.
Could this be intentional, so that nobody scrapes their data? I just want to make sure there is no way around this problem.
Thanks in advance, Juan
Note: I tried using the user-agent field, to no avail (a sketch of that attempt follows the code below).
# Note: I run > set PYTHONIOENCODING=utf-8 before executing python
import requests
# URL with wunderground weather information for a specific date:
date = '2019-03-12'
url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/' + date
r = requests.get(url)
# Write a file to check if the tables ar being retrieved:
with open('test.html', 'wb') as testfile:
    testfile.write(r.text.encode('utf-8'))
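For reference, the user-agent attempt mentioned above looked roughly like this (the header value is illustrative, not the exact string I used); the tables were still missing from the response:

import requests

url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-03-12'
# Illustrative desktop user-agent string; any common one can be tried here.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
r = requests.get(url, headers=headers)
# Quick check for table markup in the static HTML (in my case the data
# tables never showed up, with or without the custom header):
print('<table' in r.text)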
UPDATE: SOLUTION FOUND
Thanks for pointing me to the selenium module, it was the exact solution I needed. As it turns out, the tables are rendered client-side by JavaScript, so plain requests only retrieves the static shell of the page, while selenium drives a real browser and sees the rendered result. The code below extracts all the tables present at the URL for a given date (as seen when visiting the site normally). It still needs to be modified to loop over a list of dates and to organize the CSV files it creates (a sketch of such a loop is at the end of this post).
Note: geckodriver.exe is needed in the working directory.
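If you would rather not keep geckodriver.exe in the working directory, selenium 3.x also accepts an explicit path to the driver (a minimal sketch; the path below is hypothetical, and this argument was later deprecated in selenium 4):

from selenium import webdriver

# Hypothetical driver location; adjust to wherever geckodriver.exe actually lives.
driver = webdriver.Firefox(executable_path=r'C:\tools\geckodriver.exe')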
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
import re
# URL with wunderground weather information
url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-3-12'
# Point selenium at the local Firefox binary (adjust the path if needed):
bi = FirefoxBinary(r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe')
br = webdriver.Firefox(firefox_binary=bi)
# This starts an instance of Firefox at the specified URL:
br.get(url)
# At this point the fully rendered page source (including the JS-generated
# tables) is available and can be parsed with BeautifulSoup:
sopa = BeautifulSoup(br.page_source, 'lxml')
# Close the firefox instance started before:
br.quit()
# I'm only interested in the tables contained on the page:
tablas = sopa.find_all('table')
# Write all the tables into csv files:
for i in range(len(tablas)):
    out_file = open('wunderground' + str(i + 1) + '.csv', 'w', encoding='utf-8')
    tabla = tablas[i]
    # ---- Write the table header: ----
    table_head = tabla.findAll('th')
    output_head = []
    for head in table_head:
        output_head.append(head.text.strip())
    # Quote each field and strip all whitespace (including inside cells):
    encabezado = '"' + '";"'.join(output_head) + '"'
    encabezado = re.sub(r'\s', '', encabezado) + '\n'
    out_file.write(encabezado)
    # ---- Write the rows: ----
    filas = tabla.findAll('tr')
    for j in range(1, len(filas)):
        table_row = filas[j]
        columns = table_row.findAll('td')
        output_row = []
        for column in columns:
            output_row.append(column.text.strip())
        # Same cleaning and quoting as for the header:
        fila = '"' + '";"'.join(output_row) + '"'
        fila = re.sub(r'\s', '', fila) + '\n'
        out_file.write(fila)
    out_file.close()
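As a side note, the manual quoting above can be replaced with Python's built-in csv module, which handles quoting and escaping automatically (a sketch reusing the sopa object from the code above; unlike the re.sub cleaning, it keeps whitespace inside cell values):

import csv

# 'sopa' is the BeautifulSoup object built from the selenium page source above.
for i, tabla in enumerate(sopa.find_all('table')):
    with open('wunderground' + str(i + 1) + '.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f, delimiter=';', quoting=csv.QUOTE_ALL)
        # Header row from the <th> cells, then one row per <tr> (skipping the header):
        writer.writerow(th.text.strip() for th in tabla.find_all('th'))
        for fila in tabla.find_all('tr')[1:]:
            writer.writerow(td.text.strip() for td in fila.find_all('td'))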
EXTRA: @QHarr's answer works great, but I needed a couple of modifications to use it, because I use Firefox on my PC. It is important to note that for this to work I had to add the geckodriver.exe file to my working directory. Here is the code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-03-12'
bi = FirefoxBinary(r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe')
driver = webdriver.Firefox(firefox_binary=bi)
# driver = webdriver.Chrome()
driver.get(url)
tables = WebDriverWait(driver,20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table")))
for table in tables:
    newTable = pd.read_html(table.get_attribute('outerHTML'))
    if newTable:
        print(newTable[0].fillna(''))
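Finally, the loop over dates mentioned in the update could look roughly like this (a sketch only; the date range and the file-naming scheme are my own illustrative choices):

from datetime import date, timedelta

import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

base_url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/'
start, ndays = date(2019, 3, 12), 3  # illustrative starting date and span

driver = webdriver.Firefox()  # geckodriver.exe must be reachable, as above
for n in range(ndays):
    day = start + timedelta(days=n)
    driver.get(base_url + day.isoformat())
    tables = WebDriverWait(driver, 20).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'table')))
    for i, table in enumerate(tables):
        frames = pd.read_html(table.get_attribute('outerHTML'))
        if frames:
            # One CSV per table per date, e.g. wunderground_2019-03-12_1.csv
            frames[0].fillna('').to_csv(
                'wunderground_{}_{}.csv'.format(day.isoformat(), i + 1),
                index=False)
driver.quit()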