Using Python 2.5, I'm reading an HTML page for three different pieces of information. I find each piece by matching a regex* and then counting a specific number of lines down from the matched line to get the actual value I'm after. The problem is that I have to re-open the site three times (once for each piece of information I look up). That seems inefficient, and I'd like to be able to look up all three things while opening the site only once. Does anyone have a better method or a suggestion?
*I'll learn a better way, such as BeautifulSoup, but for now I need a quick fix.
Code:
def scrubdividata(ticker):
    try:
        f = urllib2.urlopen('http://dividata.com/stock/%s'%(ticker))
        lines = f.readlines()
        for i in range(0,len(lines)):
            line = lines[i]
            if "Annual Dividend:" in line:
                s = str(lines[i+1])
                start = '>\$'
                end = '</td>'
                AnnualDiv = re.search('%s(.*)%s' % (start, end), s).group(1)
        f = urllib2.urlopen('http://dividata.com/stock/%s'%(ticker))
        lines = f.readlines()
        for i in range(0,len(lines)):
            line = lines[i]
            if "Last Dividend:" in line:
                s = str(lines[i+1])
                start = '>\$'
                end = '</td>'
                LastDiv = re.search('%s(.*)%s' % (start, end), s).group(1)
        f = urllib2.urlopen('http://dividata.com/stock/%s'%(ticker))
        lines = f.readlines()
        for i in range(0,len(lines)):
            line = lines[i]
            if "Last Ex-Dividend Date:" in line:
                s = str(lines[i+1])
                start = '>'
                end = '</td>'
                LastExDivDate = re.search('%s(.*)%s' % (start, end), s).group(1)
        divlist.append((ticker,LastDiv,AnnualDiv,LastExDivDate))
    except:
        if ticker not in errorlist:
            errorlist.append(ticker)
        else:
            pass
        pass
Thanks,
B
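For reference, the extraction step in the code above can be checked in isolation. Below is a minimal sketch (Python 3 syntax) using a made-up HTML fragment; the pattern pieces `'>\$'` and `'</td>'` are taken from the code, while the sample line itself is an assumption about what the page looks like:

```python
import re

# Hypothetical table cell like the ones the scraper scans (assumed layout)
s = '<td class="value">$2.48</td>'
start = r'>\$'   # literal ">$"; the backslash escapes the regex end-anchor "$"
end = '</td>'

# Same search expression the question's code builds with string formatting
result = re.search('%s(.*)%s' % (start, end), s).group(1)
print(result)  # → 2.48
```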
I found a working solution! I deleted the two redundant urlopen and readlines calls, leaving just one of each for all the loops (earlier I had removed only the urlopen calls but left the readlines). Here is my corrected code:
def scrubdividata(ticker):
    try:
        f = urllib2.urlopen('http://dividata.com/stock/%s'%(ticker))
        lines = f.readlines()
        for i in range(0,len(lines)):
            line = lines[i]
            if "Annual Dividend:" in line:
                s = str(lines[i+1])
                start = '>\$'
                end = '</td>'
                AnnualDiv = re.search('%s(.*)%s' % (start, end), s).group(1)
        #f = urllib2.urlopen('http://dividata.com/stock/%s'%(ticker))
        #lines = f.readlines()
        for i in range(0,len(lines)):
            line = lines[i]
            if "Last Dividend:" in line:
                s = str(lines[i+1])
                start = '>\$'
                end = '</td>'
                LastDiv = re.search('%s(.*)%s' % (start, end), s).group(1)
        #f = urllib2.urlopen('http://dividata.com/stock/%s'%(ticker))
        #lines = f.readlines()
        for i in range(0,len(lines)):
            line = lines[i]
            if "Last Ex-Dividend Date:" in line:
                s = str(lines[i+1])
                start = '>'
                end = '</td>'
                LastExDivDate = re.search('%s(.*)%s' % (start, end), s).group(1)
        divlist.append((ticker,LastDiv,AnnualDiv,LastExDivDate))
        print '@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
        print ticker,LastDiv,AnnualDiv,LastExDivDate
        print '@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'
    except:
        if ticker not in errorlist:
            errorlist.append(ticker)
        else:
            pass
        pass
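Since all three loops now scan the same `lines`, they could also be collapsed into a single pass. Here is a sketch of that idea (Python 3 syntax, not the poster's code): the network fetch is replaced with a stubbed list of sample lines so the example is self-contained, and the sample HTML layout is an assumption, not the real page:

```python
import re

def scrub_dividata_lines(lines):
    """One pass over the fetched lines, collecting all three fields.

    Sketch only: assumes the same layout as the original code,
    with each value on the line after its label.
    """
    patterns = {
        'AnnualDiv': ('Annual Dividend:', r'>\$(.*)</td>'),
        'LastDiv': ('Last Dividend:', r'>\$(.*)</td>'),
        'LastExDivDate': ('Last Ex-Dividend Date:', r'>(.*)</td>'),
    }
    results = {}
    for i, line in enumerate(lines[:-1]):
        for key, (label, pattern) in patterns.items():
            if label in line:
                m = re.search(pattern, lines[i + 1])
                if m:
                    results[key] = m.group(1)
    return results

# Hypothetical sample lines mimicking the assumed page layout
sample = [
    '<th>Annual Dividend:</th>',
    '<td>$2.48</td>',
    '<th>Last Dividend:</th>',
    '<td>$0.62</td>',
    '<th>Last Ex-Dividend Date:</th>',
    '<td>2013-05-01</td>',
]
info = scrub_dividata_lines(sample)
print(info['AnnualDiv'], info['LastDiv'], info['LastExDivDate'])
```

With the real page, `lines` would come from a single `urlopen(...).readlines()` call, exactly as in the corrected code above.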