I'm trying to pull data for every game a team played during the regular season, starting from http://www.basketball-reference.com/boxscores/201112250DAL.html. I have all of my other data-scraping functions working; the part I'm stuck on is looping the scraper itself. Below is the test code I use to get the URL of the next page. I could use it to grab the data for all 66 games a team played during the regular season, but it would take a huge amount of typing to do it this way. What's the simplest way to automate this?
Thanks!
URL = "http://www.basketball-reference.com/boxscores/201112250DAL.html"
html = urlopen(URL).read()
soup = BeautifulSoup(html)
def getLink(html, soup):
links = soup.findAll('a', attrs={'class': 'bold_text'})
if len(links) == 2:
a = links[0]
a = str(a)
a = a[37:51]
return a
if len(links) == 3:
a = links[1]
a = str(a)
a = a[37:51]
return a
if len(links) == 4:
a = links[3]
a = str(a)
a = a[37:51]
return a
print getLink(html, soup)
URL1 = "http://www.basketball-reference.com/boxscores" + getLink(html, soup) + "html"
print URL1
html1 = urlopen(URL1).read()
soup1 = BeautifulSoup(html1)
print getLink(html1, soup1)
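For reference, this is a rough sketch of the kind of loop I imagine replacing the manual repetition with (getNextGames is just a name I made up; it assumes getLink can be called on each page in turn and that the trailing-dot path slice stays valid for every box score). I'm not sure this is the cleanest approach:

def getNextGames(startURL, numGames=66):
    # Sketch only: follow the "next game" link numGames times, collecting
    # each page's soup so the other scraping functions can run on it.
    soups = []
    url = startURL
    for _ in range(numGames):
        html = urlopen(url).read()
        soup = BeautifulSoup(html)
        soups.append(soup)            # scrape this game's data from soup here
        nextPath = getLink(html, soup)
        if nextPath is None:          # no recognisable next-game link; stop early
            break
        url = "http://www.basketball-reference.com/boxscores" + nextPath + "html"
    return soups

allGames = getNextGames("http://www.basketball-reference.com/boxscores/201112250DAL.html")
print len(allGames)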