
I'm using Pandas 0.8.1 and cannot change versions at the moment. If a newer version would help with the problem below, please note that in a comment rather than an answer. Also, this is for a research replication project, so even though re-running a regression after adding just one new data point may be silly (when the dataset is large), I still have to do it. Thanks!

In Pandas there is a rolling option for the window_type argument to pandas.ols, but it seems to imply that this requires some choice of a window size or the use of the whole data sample as a default. I'm looking instead to use all of the data in a cumulative fashion.

I'm trying to run a regression on a pandas.DataFrame that is sorted by date. For each index i, I want to run the regression using the data available from the earliest date up through the date at index i. So the window effectively grows by one on each iteration, all of the data from the earliest observation onward is used cumulatively, and no data is ever dropped from the window.

I have written a function (below) that works with apply to do this, but it is unacceptably slow. Is there instead a way to use pandas.ols directly to perform this kind of cumulative regression?

Here are some more specifics about my data. I have a pandas.DataFrame containing one column of identifiers, one column of dates, one column of left-hand-side values, and one column of right-hand-side values. I want to use groupby to group on the identifier column, and then perform a cumulative regression of the left-hand-side variable on the right-hand-side variable for every time period.
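For concreteness, a toy frame with the same layout (the column names ID, Date, LHS, and RHS are placeholders, not my real column names):

import numpy as np
import pandas

# Placeholder columns: identifier, date, left-hand-side value, right-hand-side value.
df = pandas.DataFrame({
    "ID":   ["A"] * 4 + ["B"] * 4,
    "Date": list(pandas.date_range("2000-01-01", periods=4)) * 2,
    "LHS":  np.random.randn(8),
    "RHS":  np.random.randn(8),
})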

Here is the function that I can use with apply on the grouped-by-identifier object:

import numpy as np
import pandas

def cumulative_ols(
                   data_frame, 
                   lhs_column, 
                   rhs_column, 
                   date_column,
                   min_obs=60
                  ):

    beta_dict = {}
    # Expanding window: re-run the regression on all observations dated <= dt.
    for dt in data_frame[date_column].unique():
        cur_df = data_frame[data_frame[date_column] <= dt]
        obs_count = cur_df[lhs_column].notnull().sum()

        if min_obs <= obs_count:
            beta = pandas.ols(
                              y=cur_df[lhs_column],
                              x=cur_df[rhs_column],
                             ).beta.ix['x']
            ###
        else:
            beta = np.NaN
        ###
        beta_dict[dt] = beta
    ###

    beta_df = pandas.DataFrame(pandas.Series(beta_dict, name="FactorBeta"))
    beta_df.index.name = date_column
    return beta_df
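For reference, the apply call on the identifier-grouped object looks roughly like this (a sketch using the placeholder column names from the toy frame above; min_obs is lowered only so the toy data produces non-NaN output):

# One cumulative-beta series per identifier, indexed by (ID, Date).
betas = df.groupby("ID").apply(
    lambda g: cumulative_ols(g, lhs_column="LHS", rhs_column="RHS",
                             date_column="Date", min_obs=1)
)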

1 Answer


Based on a suggestion in the comments, I created my own function that can be used with apply and that relies on cumsum to accumulate all of the individual terms needed to express the coefficient of a univariate OLS regression in a vectorized way.

import numpy as np

def cumulative_ols(
                   data_frame,
                   lhs_column,
                   rhs_column,
                   date_column,
                   min_obs=60,
                  ):
    """
    Function to perform a cumulative OLS on a Pandas data frame. It is
    meant to be used with `apply` after grouping the data frame by categories
    and sorting by date, so that the regression below applies to the time
    series of a single category's data and the use of `cumsum` will work    
    appropriately given sorted dates. It is also assumed that the date 
    conventions of the left-hand-side and right-hand-side variables have been 
    arranged by the user to match up with any lagging conventions needed.

    This OLS is implicitly univariate and relies on the simplification to the
    formula:

    Cov(x,y) ~ (1/n)*sum(x*y) - (1/n)*sum(x)*(1/n)*sum(y)
    Var(x)   ~ (1/n)*sum(x^2) - ((1/n)*sum(x))^2
    beta     ~ Cov(x,y) / Var(x)

    and the code makes a further simplification by cancelling one factor 
    of (1/n).

    Notes: one easy improvement is to change the date column to a generic sort
    column since there's no special reason the regressions need to be time-
    series specific.
    """
    data_frame["xy"]         = (data_frame[lhs_column] * data_frame[rhs_column]).fillna(0.0)
    data_frame["x2"]         = (data_frame[rhs_column]**2).fillna(0.0)
    data_frame["yobs"]       = data_frame[lhs_column].notnull().map(int)
    data_frame["xobs"]       = data_frame[rhs_column].notnull().map(int)
    data_frame["cum_yobs"]   = data_frame["yobs"].cumsum()
    data_frame["cum_xobs"]   = data_frame["xobs"].cumsum()
    data_frame["cumsum_xy"]  = data_frame["xy"].cumsum()
    data_frame["cumsum_x2"]  = data_frame["x2"].cumsum()
    data_frame["cumsum_x"]   = data_frame[rhs_column].fillna(0.0).cumsum()
    data_frame["cumsum_y"]   = data_frame[lhs_column].fillna(0.0).cumsum()
    data_frame["cum_cov"]    = data_frame["cumsum_xy"] - (1.0/data_frame["cum_yobs"])*data_frame["cumsum_x"]*data_frame["cumsum_y"]
    data_frame["cum_x_var"]  = data_frame["cumsum_x2"] - (1.0/data_frame["cum_xobs"])*(data_frame["cumsum_x"])**2
    data_frame["FactorBeta"] = data_frame["cum_cov"]/data_frame["cum_x_var"]
    data_frame["FactorBeta"][data_frame["cum_yobs"] < min_obs] = np.NaN
    return data_frame[[date_column, "FactorBeta"]].set_index(date_column)
### End cumulative_ols
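For completeness, the call looks roughly like the following; the important difference from the earlier version is that the frame must be sorted by date before grouping, since the cumsum bookkeeping assumes each group's rows are in time order (column names are again placeholders):

# Sort by date first so cumsum accumulates each identifier's observations in
# chronological order (sort_index(by=...) here; sort_values(...) on newer Pandas).
df = df.sort_index(by="Date")
betas = df.groupby("ID").apply(
    lambda g: cumulative_ols(g, lhs_column="LHS", rhs_column="RHS",
                             date_column="Date", min_obs=1)
)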

I have verified on a number of test cases that this matches the output of my previous function and the output of NumPy's linalg.lstsq. I haven't done a full benchmark of the timing, but anecdotally it is about 50 times faster in the cases I have been working on.
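The spot check against NumPy looks roughly like this (a sketch: the identifier and column names are placeholders, the chosen group is assumed to have no missing values, and min_obs is lowered so the last row is not masked):

import numpy as np

an_id = df["ID"].values[0]                        # check a single identifier's group
g = df[df["ID"] == an_id].sort_index(by="Date")   # sort_values("Date") on newer Pandas

# Direct least-squares fit with an intercept column, keeping the slope.
X = np.column_stack([np.ones(len(g)), g["RHS"].values])
slope = np.linalg.lstsq(X, g["LHS"].values)[0][1]

# The last row of the cumulative beta uses the full history, so it should agree.
# g.copy() because the function adds its helper columns to the frame in place.
cum = cumulative_ols(g.copy(), "LHS", "RHS", "Date", min_obs=1)
assert np.allclose(slope, cum["FactorBeta"].values[-1])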

answered 2013-02-27 at 13:43