Please note: this question has since been successfully answered by ptrj below. I have also written a blog post about my experience with zipline, which you can find here: https://financialzipline.wordpress.com
I'm based in South Africa and I'm trying to load South African stocks into a dataframe so that it can feed share price information to zipline. Say I'm looking at AdCorp Holdings Limited, which is listed on the JSE (Johannesburg Stock Exchange):
Google Finance gives me the historical price information:
https://www.google.com/finance/historical?q=JSE%3AADR&ei=5G6OV4ibBIi8UcP-nfgB
Yahoo Finance has no information on this company:
https://finance.yahoo.com/quote/adcorp?ltr=1
Typing the following code in an iPython Notebook gets me a dataframe with the information from Google Finance:
import datetime
import pandas_datareader.data as web

start = datetime.datetime(2016, 7, 1)
end = datetime.datetime(2016, 7, 18)
f = web.DataReader('JSE:ADR', 'google', start, end)
If I display f, I can see that the information corresponds to what is shown on Google Finance:
The prices match Google Finance exactly - if you look up 2016-07-18 on the Google Finance site, you can see that it lines up with my dataframe.
However, I'm not sure how to load this dataframe so that zipline can use it as a data bundle.
If you look at the buyapple.py example that ships with zipline, you can see that it simply pulls data for Apple stock (AAPL) from the ingested quantopian-quandl bundle. The challenge here is to swap AAPL out for JSE:ADR, so that the algorithm orders 10 shares of JSE:ADR each day from my dataframe rather than from the quantopian-quandl bundle, and plots the result on a chart.
Does anyone know how to do this? There are hardly any examples online that deal with this...
Here is the buyapple.py code provided in zipline's examples folder:
from zipline.api import order, record, symbol
def initialize(context):
pass
def handle_data(context, data):
order(symbol('AAPL'), 10)
record(AAPL=data.current(symbol('AAPL'), 'price'))
# Note: this function can be removed if running
# this algorithm on quantopian.com
def analyze(context=None, results=None):
import matplotlib.pyplot as plt
# Plot the portfolio and asset data.
ax1 = plt.subplot(211)
results.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('Portfolio value (USD)')
ax2 = plt.subplot(212, sharex=ax1)
results.AAPL.plot(ax=ax2)
ax2.set_ylabel('AAPL price (USD)')
# Show the plot.
plt.gcf().set_size_inches(18, 8)
plt.show()
def _test_args():
"""Extra arguments to use when zipline's automated tests run this example.
"""
import pandas as pd
return {
'start': pd.Timestamp('2014-01-01', tz='utc'),
'end': pd.Timestamp('2014-11-01', tz='utc'),
}
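To make the goal concrete, here is a rough sketch of what I imagine the adapted algorithm would look like once a bundle containing JSE:ADR has been ingested. The recorded field name ADR, the ZAR axis labels, and the names below are just my own placeholders - this is not working code yet, because I still don't know how to get the dataframe in as a bundle:

from zipline.api import order, record, symbol

def initialize(context):
    # Look the asset up once; 'JSE:ADR' has to exist in whatever bundle
    # the algorithm is run against.
    context.asset = symbol('JSE:ADR')

def handle_data(context, data):
    # Order 10 shares every trading day and record the price for plotting.
    order(context.asset, 10)
    record(ADR=data.current(context.asset, 'price'))

def analyze(context=None, results=None):
    import matplotlib.pyplot as plt

    # Plot the portfolio value and the JSE:ADR price series.
    ax1 = plt.subplot(211)
    results.portfolio_value.plot(ax=ax1)
    ax1.set_ylabel('Portfolio value (ZAR)')
    ax2 = plt.subplot(212, sharex=ax1)
    results.ADR.plot(ax=ax2)
    ax2.set_ylabel('JSE:ADR price (ZAR)')
    plt.gcf().set_size_inches(18, 8)
    plt.show()

I would then expect to run it with something like zipline run -f buyadcorp.py --bundle my-google-bundle --start 2016-7-1 --end 2016-7-18, where buyadcorp.py and my-google-bundle are names I made up.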
EDIT:
I looked at the code that pulls data from Yahoo Finance and modified it a bit so that it takes Google Finance data instead. The Yahoo Finance code can be found here: http://www.zipline.io/_modules/zipline/data/bundles/yahoo.html
Here is my code for pulling the Google Finance data - unfortunately it doesn't work. Could someone more fluent in Python help me out?:
import os
import numpy as np
import pandas as pd
from pandas_datareader.data import DataReader
import requests
from zipline.utils.cli import maybe_show_progress
def _cachpath(symbol, type_):
return '-'.join((symbol.replace(os.path.sep, '_'), type_))
def google_equities(symbols, start=None, end=None):
"""Create a data bundle ingest function from a set of symbols loaded from
Google Finance.
Parameters
----------
symbols : iterable[str]
The ticker symbols to load data for.
start : datetime, optional
The start date to query for. By default this pulls the full history
for the calendar.
end : datetime, optional
The end date to query for. By default this pulls the full history
for the calendar.
Returns
-------
ingest : callable
The bundle ingest function for the given set of symbols.
Examples
--------
This code should be added to ~/.zipline/extension.py
.. code-block:: python
from zipline.data.bundles import register
symbols = (
'AAPL',
'IBM',
'MSFT',
)
register('my_bundle', google_equities(symbols))
Notes
-----
The sids for each symbol will be the index into the symbols sequence.
"""
# keep this in memory so that we can reiterate over it
symbols = tuple(symbols)
def ingest(environ,
asset_db_writer,
minute_bar_writer, # unused
daily_bar_writer,
adjustment_writer,
calendar,
cache,
show_progress,
output_dir,
# pass these as defaults to make them 'nonlocal' in py2
start=start,
end=end):
if start is None:
start = calendar[0]
if end is None:
end = None
metadata = pd.DataFrame(np.empty(len(symbols), dtype=[
('start_date', 'datetime64[ns]'),
('end_date', 'datetime64[ns]'),
('auto_close_date', 'datetime64[ns]'),
('symbol', 'object'),
]))
def _pricing_iter():
sid = 0
with maybe_show_progress(
symbols,
show_progress,
label='Downloading Google pricing data: ') as it, \
requests.Session() as session:
for symbol in it:
path = _cachpath(symbol, 'ohlcv')
try:
df = cache[path]
except KeyError:
df = cache[path] = DataReader(
symbol,
'google',
start,
end,
session=session,
).sort_index()
# the start date is the date of the first trade and
# the end date is the date of the last trade
start_date = df.index[0]
end_date = df.index[-1]
# The auto_close date is the day after the last trade.
ac_date = end_date + pd.Timedelta(days=1)
metadata.iloc[sid] = start_date, end_date, ac_date, symbol
df.rename(
columns={
'Open': 'open',
'High': 'high',
'Low': 'low',
'Close': 'close',
'Volume': 'volume',
},
inplace=True,
)
yield sid, df
sid += 1
daily_bar_writer.write(_pricing_iter(), show_progress=True)
symbol_map = pd.Series(metadata.symbol.index, metadata.symbol)
asset_db_writer.write(equities=metadata)
adjustment_writer.write(splits=pd.DataFrame(), dividends=pd.DataFrame())
# adjustments = []
# with maybe_show_progress(
# symbols,
# show_progress,
# label='Downloading Google adjustment data: ') as it, \
# requests.Session() as session:
# for symbol in it:
# path = _cachpath(symbol, 'adjustment')
# try:
# df = cache[path]
# except KeyError:
# df = cache[path] = DataReader(
# symbol,
# 'google-actions',
# start,
# end,
# session=session,
# ).sort_index()
# df['sid'] = symbol_map[symbol]
# adjustments.append(df)
# adj_df = pd.concat(adjustments)
# adj_df.index.name = 'date'
# adj_df.reset_index(inplace=True)
# splits = adj_df[adj_df.action == 'SPLIT']
# splits = splits.rename(
# columns={'value': 'ratio', 'date': 'effective_date'},
# )
# splits.drop('action', axis=1, inplace=True)
# dividends = adj_df[adj_df.action == 'DIVIDEND']
# dividends = dividends.rename(
# columns={'value': 'amount', 'date': 'ex_date'},
# )
# dividends.drop('action', axis=1, inplace=True)
# # we do not have this data in the yahoo dataset
# dividends['record_date'] = pd.NaT
# dividends['declared_date'] = pd.NaT
# dividends['pay_date'] = pd.NaT
# adjustment_writer.write(splits=splits, dividends=dividends)
return ingest
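Based on the Examples section of the docstring above, my understanding is that this function then gets registered in ~/.zipline/extension.py and ingested from the command line, roughly like this. The bundle name google-jse and the module name google_bundle (wherever I end up saving the function above) are my own placeholders:

# ~/.zipline/extension.py
from zipline.data.bundles import register

# google_bundle.py is the (hypothetical) module holding the google_equities
# function defined above.
from google_bundle import google_equities

symbols = (
    'JSE:ADR',
)

register('google-jse', google_equities(symbols))

followed by:

zipline ingest -b google-jse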