So this works much better than the way I was doing it, but there are still a few problems. I've posted the full script so you can see what I'm doing. It will take some time and effort to track these issues down, but either way it's helping me learn Python and BeautifulSoup.
"""
This program imports a list of stock ticker symbols from "ca_stocks.txt"
It then goes to the Globe website and gets current company stock data
It then writes this data to a file to a CSV file in the form
index, ticker, date&time, dimension, measure
"""
import urllib2
import csv, os
import datetime
import re #regular expressions library
import bs4
#from bs4 import BeautifulStoneSoup as bss
#from time import gmtime, strftime
#from lxml import etree
import pyquery
#import dataextract as tde
os.chdir('D:\\02 - \\003 INVESTMENTS\\Yahoo Finance Data')
symbolfile = open('ca_stocks2.txt')
symbolslist = symbolfile.read().split('\n')
def pairs(l,n):
# l = list
# n = number
return zip(*[l[i::n] for i in range(n)])
def main():
i=0
while i<len(symbolslist):
print symbolslist[i]
url = urllib2.urlopen("http://www.theglobeandmail.com/globe-investor/markets/stocks/summary/?q=" +symbolslist[i])
root = bs4.BeautifulSoup(url)
[span.text for span in root("li.clearfix > span")]
[(span.text, span.findNextSibling('span').text) for span in root.select("li.clearfix > span.label")]
dims = [[]] *40
mess = [[]] *40
j=0
for span in root.select("li.clearfix > span.label"):
#print "%s\t%s" % ( span.text, span.findNextSibling('span').text)
dims[j] = span.text
mess[j] = span.findNextSibling('span').text
j+=1
nowtime = datetime.datetime.now().isoformat()
with open('globecdndata.csv','ab') as f:
fw = csv.writer(f, dialect='excel')
for s in range(0,37):
csvRow = s, symbolslist[i], nowtime, dims[s], mess[s]
print csvRow
fw.writerow(csvRow)
f.close()
i+=1
if __name__ == "__main__":
main()
I know this is ugly code, but hey, I'm learning. The CSV output now looks like this:
(4, 'TT', '2013-11-09T19:32:32.416000', u'Bidx0', u'36.88')
(5, 'TT', '2013-11-09T19:32:32.416000', u'Askx0', u'36.93')
(6, 'TT', '2013-11-09T19:32:32.416000', u'52-week High05/22', u'37.94')
The date "05/22" changes every time the price hits a new high or low, which isn't ideal as part of a dimension (field) name.
(7, 'TT', '2013-11-09T19:32:32.416000', u'52-week Low06/27', u'29.52')
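One way I could clean up those "52-week High05/22" / "52-week Low06/27" labels is to strip the trailing date off before writing the row. This is just a sketch, assuming the date is always tacked onto the end of the label in MM/DD form; split_label_date is a hypothetical helper, not something in the script yet:

import re

# Hypothetical helper: split a trailing MM/DD date off a label such as
# "52-week High05/22", returning ("52-week High", "05/22").
# Labels without a trailing date come back unchanged.
def split_label_date(label):
    m = re.match(r'^(.*?)(\d{2}/\d{2})$', label)
    if m:
        return m.group(1).strip(), m.group(2)
    return label, ''

print split_label_date(u'52-week High05/22')   # (u'52-week High', u'05/22')
print split_label_date(u'Bidx0')               # (u'Bidx0', '')

The date could then go into its own column instead of polluting the field name. And then there's this row in the output: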
(35, 'TT', '2013-11-09T19:32:32.416000', u'Top 1000 Ranking:', u'Profit: 28Revenue: 34Assets: 36')
For some reason it lumps those dimensions (fields) and measures (data) together into one value. Hmm...
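My guess is that the value span for that row contains several nested elements, so .text concatenates them into one string. If I want to pull them apart afterwards, something like this might work. Again just a sketch, assuming the combined string is always "Name: number" repeated; split_rankings is a hypothetical helper:

import re

# Hypothetical helper: break a run-together measure such as
# "Profit: 28Revenue: 34Assets: 36" into (name, value) pairs.
def split_rankings(text):
    return re.findall(r'([A-Za-z ]+):\s*(\d+)', text)

print split_rankings(u'Profit: 28Revenue: 34Assets: 36')
# [(u'Profit', u'28'), (u'Revenue', u'34'), (u'Assets', u'36')]

Each pair could then be written as its own row (e.g. "Top 1000 Ranking: Profit", 28) so the CSV stays one measure per line.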
That's a list of some of the issues. But, like I said, I should be able to figure these out now. Lots to learn, thanks. Input from anyone who knows what they're doing would be great.