Hello fellow Australian :)
If I were you, I'd use requests and lxml. I think the site is checking cookies and some headers. Requests' session class stores cookies and also lets you pass headers. lxml lets you use xpath here, which I think will be less painful than BeautifulSoup's interface.
See below:
>>> import lxml.html
>>> import requests
>>> session = requests.session()
>>> response = session.get("http://www.footywire.com/afl/footy/ft_match_statistics?mid=5634", headers={"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36","Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Referer":"http://www.footywire.com/afl/footy/ft_match_statistics?mid=5634","Cache-Control":"max-age=0"})
>>> tree = lxml.html.fromstring(response.text)
>>> rows = tree.xpath("//table//table//table//table//table//table//tr")
>>> for row in rows:
...     row.xpath(".//td//text()")
...
[u'\xa0\xa0', 'Sydney Match Statistics (Sorted by Disposals)', 'Coach: ', 'John Longmire', u'\xa0\xa0']
['Player', 'K', 'HB', 'D', 'M', 'G', 'B', 'T', 'HO', 'I50', 'FF', 'FA', 'DT', 'SC']
['Josh Kennedy', '20', '17', '37', '2', '1', '1', '1', '0', '3', '1', '0', '112', '126']
['Jarrad McVeigh', '23', '11', '34', '1', '0', '0', '2', '0', '5', '1', '1', '100', '116']
... cont...
The xpath query is probably a bit brittle, but you get the idea :)
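If it helps, here is the same extraction pattern as a self-contained sketch you can run offline. The HTML below is a made-up stand-in for the real page's nested tables (so the xpath is shorter than the one above), just to show how the row/cell queries compose:

```python
import lxml.html

# Hypothetical snippet mimicking the nested-table layout of the real page.
html = """
<table><tr><td>
  <table>
    <tr><td>Player</td><td>K</td><td>HB</td></tr>
    <tr><td>Josh Kennedy</td><td>20</td><td>17</td></tr>
  </table>
</td></tr></table>
"""

tree = lxml.html.fromstring(html)
# Select rows of a table nested inside another table, then pull the
# text of every cell in each row -- same shape as the query above.
rows = tree.xpath("//table//table//tr")
data = [row.xpath(".//td//text()") for row in rows]
print(data)
# → [['Player', 'K', 'HB'], ['Josh Kennedy', '20', '17']]
```

For the real page you would feed `response.text` from the session into `lxml.html.fromstring` exactly as in the transcript; only the depth of the `//table` nesting differs.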