Okay, I'm at my wit's end here. For my class, we're supposed to scrape data from the wunderground.com website. We keep running into problems (error messages), or the code runs fine but the .txt file ends up with no data in it. This is frustrating because I need this to work! So here's my code.
import urllib2
from bs4 import BeautifulSoup

f = open('wunder-data1.txt', 'w')
for m in range(1, 13):
    for d in range(1, 32):
        # Skip days that don't exist in this month
        if m == 2 and d > 28:
            break
        elif m in [4, 6, 9, 11] and d > 30:
            break
        url = "http://www.wunderground.com/history/airport/KBUF/2009/" + str(m) + "/" + str(d) + "/DailyHistory.html"
        page = urllib2.urlopen(url)
        soup = BeautifulSoup(page, "html.parser")
        dayTemp = soup.find("span", text="Mean Temperature").parent.find_next_sibling("td").get_text(strip=True)
        # Zero-pad single-digit months and days
        if len(str(m)) < 2:
            mStamp = '0' + str(m)
        else:
            mStamp = str(m)
        if len(str(d)) < 2:
            dStamp = '0' + str(d)
        else:
            dStamp = str(d)
        timestamp = '2009' + mStamp + dStamp
        f.write(timestamp.encode('utf-8') + ',' + dayTemp + '\n')
f.close()
Also, sorry if the indentation in this code isn't right for Python. I'm not good at that part.
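For what it's worth, I tried checking just the date logic on its own (no scraping), since the month/day padding and the "skip day 31" checks felt error-prone. This is only a sketch to sanity-check the timestamps, not my actual homework code; `datetime` knows the month lengths, so the manual `break` checks aren't needed:

```python
from datetime import date, timedelta

# Walk over every real day of 2009; datetime handles month lengths,
# so February 30th etc. never come up.
day = date(2009, 1, 1)
stamps = []
while day.year == 2009:
    stamps.append(day.strftime('%Y%m%d'))  # zero-padded, e.g. '20090101'
    day += timedelta(days=1)

print(len(stamps))  # 365 days in 2009 (not a leap year)
```

This produced the same `YYYYMMDD` strings as the padding code above, so I don't think the timestamps are the problem.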
UPDATE: So someone answered the question below and it works, but I realized I was pulling the wrong data (oops). So I put in this:
import codecs
import urllib2
from bs4 import BeautifulSoup

f = codecs.open('wunder-data2.txt', 'w', 'utf-8')
for m in range(1, 13):
    for d in range(1, 32):
        if m == 2 and d > 28:
            break
        elif m in [4, 6, 9, 11] and d > 30:
            break
        url = "http://www.wunderground.com/history/airport/KBUF/2009/" + str(m) + "/" + str(d) + "/DailyHistory.html"
        page = urllib2.urlopen(url)
        soup = BeautifulSoup(page, "html.parser")
        dayTemp = soup.findAll(attrs={"class": "wx-value"})[5].span.string
        if len(str(m)) < 2:
            mStamp = '0' + str(m)
        else:
            mStamp = str(m)
        if len(str(d)) < 2:
            dStamp = '0' + str(d)
        else:
            dStamp = str(d)
        timestamp = '2009' + mStamp + dStamp
        f.write(timestamp.encode('utf-8') + ',' + dayTemp + '\n')
f.close()
So I'm really not sure what's going wrong. All I'm trying to do is scrape the data.
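One guess about why the .txt file ends up empty: if any request or parse throws an exception, the script dies before `f.close()` runs, and the buffered writes never make it to disk. A `with` block closes (and flushes) the file even on errors. A minimal sketch of just the file-writing part, with placeholder rows instead of scraped data and a made-up filename:

```python
# Placeholder (timestamp, temperature) rows standing in for scraped values
rows = [('20090101', '25'), ('20090102', '28')]

# 'with' guarantees the file is flushed and closed, even if the loop raises
with open('wunder-demo.txt', 'w') as f:
    for timestamp, temp in rows:
        f.write(timestamp + ',' + temp + '\n')

# Read it back to confirm the rows actually landed on disk
with open('wunder-demo.txt') as f:
    print(f.read())
```

If the real loop were wrapped the same way, a crash partway through would at least leave the already-scraped days in the file.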