
I have a pandas DataFrame, df, which I created with:

df = pd.read_table('sorted_df_changes.txt', index_col=0, parse_dates=True, names=['date', 'rev_id', 'score'])

It is structured as follows:

                     page_id     score  
date
2001-05-23 19:50:14  2430        7.632989
2001-05-25 11:53:55  1814033     18.946234
2001-05-27 17:36:37  2115        3.398154
2001-08-04 21:00:51  311         19.386016
2001-08-04 21:07:42  314         14.886722

date is the index and is of type DatetimeIndex.

Each page_id may appear on one or more dates (it is not unique), and there are roughly 1 million rows. All the pages together make up the document.

I need the score of the entire document at each point in time, counting only the most recent score for any given page_id.

Example

Sample data

                     page_id     score  
date
2001-05-23 19:50:14  1           3
2001-05-25 11:53:55  2           4
2001-05-27 17:36:37  1           5
2001-05-28 19:36:37  1           1

Sample solution

                     score  
date
2001-05-23 19:50:14  3
2001-05-25 11:53:55  7 (3 + 4)
2001-05-27 17:36:37  9 (5 + 4)
2001-05-28 19:36:37  5 (1 + 4)

The entry for id 2 keeps counting toward the total because it never repeats, but each time id 1 reappears, its new score replaces its old one.
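
To make the answers below easy to try, the sample data can be rebuilt with a short snippet like this (a minimal sketch; the frame name df matches the question):

import pandas as pd

# Rebuild the sample data: a DatetimeIndex named 'date', plus page_id and score.
df = pd.DataFrame(
    {'page_id': [1, 2, 1, 1], 'score': [3, 4, 5, 1]},
    index=pd.to_datetime([
        '2001-05-23 19:50:14',
        '2001-05-25 11:53:55',
        '2001-05-27 17:36:37',
        '2001-05-28 19:36:37',
    ]),
)
df.index.name = 'date'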


4 Answers


Edit

In the end, I found a solution that doesn't need a for loop:

df.score.groupby(df.page_id).transform(lambda s: s.diff().combine_first(s)).cumsum()
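
Why this works (a quick check on the sample data): within each page_id group, s.diff() gives the change from that page's previous score, and combine_first(s) fills the group's first row, where diff is NaN, with the raw score itself; the outer cumsum then accumulates exactly these per-row adjustments.

# Per-row adjustment: a page's first appearance contributes its raw score,
# every later appearance contributes (new score - previous score).
adjustments = df.score.groupby(df.page_id).transform(
    lambda s: s.diff().combine_first(s)
)
print(adjustments.cumsum())
# 2001-05-23 19:50:14    3.0
# 2001-05-25 11:53:55    7.0
# 2001-05-27 17:36:37    9.0
# 2001-05-28 19:36:37    5.0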

Originally I thought a for loop was needed:

import numpy as np
import pandas as pd
from io import StringIO

txt = """date,page_id,score
2001-05-23 19:50:14,  1,3
2001-05-25 11:53:55,  2,4
2001-05-27 17:36:37,  1,5
2001-05-28 19:36:37,  1,1
2001-05-28 19:36:38,  3,6
2001-05-28 19:36:39,  3,9
"""

df = pd.read_csv(StringIO(txt), index_col=0)

def score_sum_py(page_id, scores):
    score_sum = 0
    # One slot per factorized page id, holding that page's latest score.
    last_score = [0] * (np.max(page_id) + 1)
    result = np.empty_like(scores.to_numpy())
    for i, (pid, score) in enumerate(zip(page_id, scores)):
        # Replace this page's previous contribution with its new score.
        score_sum = score_sum - last_score[pid] + score
        last_score[pid] = score
        result[i] = score_sum
    return pd.Series(result, index=scores.index, name="score_sum")

print(score_sum_py(pd.factorize(df.page_id)[0], df.score))

Output:

date
2001-05-23 19:50:14     3
2001-05-25 11:53:55     7
2001-05-27 17:36:37     9
2001-05-28 19:36:37     5
2001-05-28 19:36:38    11
2001-05-28 19:36:39    14
Name: score_sum

If the loop in Python is slow, you can try converting the two series page_id and scores to Python lists first; looping over the lists and doing the arithmetic with native Python integers may be faster.
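
A minimal sketch of that list-based variant (same algorithm as score_sum_py above; .tolist() materializes the columns as plain Python objects so the loop avoids numpy scalar overhead):

def score_sum_lists(page_id, scores):
    # Work on plain Python lists instead of numpy/pandas objects.
    pid_list = page_id.tolist()
    score_list = scores.tolist()
    total = 0
    last_score = [0] * (max(pid_list) + 1)
    result = []
    for pid, score in zip(pid_list, score_list):
        total = total - last_score[pid] + score
        last_score[pid] = score
        result.append(total)
    return pd.Series(result, index=scores.index, name='score_sum')

print(score_sum_lists(pd.factorize(df.page_id)[0], df.score))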

If speed matters, you can also try Cython:

%%cython
cimport cython
cimport numpy as np
import numpy as np

@cython.wraparound(False)
@cython.boundscheck(False)
def score_sum(np.ndarray[int] page_id, np.ndarray[long long] scores):
    cdef int i
    cdef long long score_sum, pid, score
    cdef np.ndarray[long long] last_score, result

    score_sum = 0
    last_score = np.zeros(np.max(page_id) + 1, dtype=np.int64)
    result = np.empty_like(scores)

    for i in range(len(page_id)):
        pid = page_id[i]
        score = scores[i]
        # Replace this page's previous contribution with its new score.
        score_sum = score_sum - last_score[pid] + score
        last_score[pid] = score
        result[i] = score_sum

    # result is a plain ndarray; wrap it in a Series outside if you need
    # the original index and a name.
    return result

Here I use pandas.factorize() to convert page_id into an array of integers in the range 0 to N-1, where N is the number of unique values in page_id. You could also use a dict to cache the last_score of each page_id and skip pandas.factorize() entirely.
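
A minimal sketch of that dict-based variant: last_score maps each raw page_id to its most recent score, so the ids never need to be compacted into the 0..N-1 range first.

def score_sum_dict(page_id, scores):
    # The dict lookup replaces indexing into a dense last_score array.
    last_score = {}
    total = 0
    result = []
    for pid, score in zip(page_id, scores):
        total = total - last_score.get(pid, 0) + score
        last_score[pid] = score
        result.append(total)
    return pd.Series(result, index=scores.index, name='score_sum')

print(score_sum_dict(df.page_id, df.score))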

Answered 2013-04-05T08:44:31.933

A different data structure makes this calculation easier to reason about. It won't perform as well as the other answers, but I think it's worth mentioning (mainly because it uses my favorite pandas function...).

In [11]: scores = pd.get_dummies(df['page_id']).mul(df['score'], axis=0).where(lambda x: x != 0)

In [12]: scores
Out[12]: 
                      1   2   3
date                           
2001-05-23 19:50:14   3 NaN NaN
2001-05-25 11:53:55 NaN   4 NaN
2001-05-27 17:36:37   5 NaN NaN
2001-05-28 19:36:37   1 NaN NaN
2001-05-28 19:36:38 NaN NaN   6
2001-05-28 19:36:39 NaN NaN   9

In [13]: scores.ffill()
Out[13]: 
                     1   2   3
date                          
2001-05-23 19:50:14  3 NaN NaN
2001-05-25 11:53:55  3   4 NaN
2001-05-27 17:36:37  5   4 NaN
2001-05-28 19:36:37  1   4 NaN
2001-05-28 19:36:38  1   4   6
2001-05-28 19:36:39  1   4   9

In [14]: scores.ffill().sum(axis=1)
Out[14]: 
date
2001-05-23 19:50:14     3
2001-05-25 11:53:55     7
2001-05-27 17:36:37     9
2001-05-28 19:36:37     5
2001-05-28 19:36:38    11
2001-05-28 19:36:39    14
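
Chained into a single expression, the whole pipeline reads (the same steps as In [11] through In [14], just composed):

pd.get_dummies(df['page_id']).mul(df['score'], axis=0).where(
    lambda x: x != 0
).ffill().sum(axis=1)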
Answered 2013-04-05T12:06:26.787

Is this what you want? I think it's a clumsy solution, though: it re-runs groupby('page_id').last() over the whole prefix for every row, so it is quadratic in the number of rows.

In [164]: df['result'] = [df[:i+1].groupby('page_id').last().sum()[0] for i in range(len(df))]

In [165]: df
Out[165]: 
                     page_id  score  result
date                                       
2001-05-23 19:50:14        1      3       3
2001-05-25 11:53:55        2      4       7
2001-05-27 17:36:37        1      5       9
2001-05-28 19:36:37        1      1       5
Answered 2013-04-05T06:45:49.150

Here's an ad hoc solution I put together using the standard library. I'd love to see an elegant, efficient solution that uses pandas.

import csv
from collections import defaultdict

page_scores = defaultdict(float)  # page_id -> latest score seen
date_scores = []  # [(date, score)]

def get_and_update_score_diff(page_id, new_score):
    diff = new_score - page_scores[page_id]
    page_scores[page_id] = new_score
    return diff

# Note: there are some duplicate dates and the file is sorted by date.
# Format: 2001-05-23T19:50:14Z, 2430, 7.632989
with open('sorted_df_changes.txt') as f:
    reader = csv.reader(f, delimiter='\t')

    first = next(reader)
    date_string, page_id, score = first[0], first[1], float(first[2])
    page_scores[page_id] = score
    date_scores.append((date_string, score))

    for date_string, page_id, score in reader:
        score = float(score)
        score_diff = get_and_update_score_diff(page_id, score)
        if date_scores[-1][0] == date_string:
            # Same timestamp: fold the change into the last entry.
            date_scores[-1] = (date_string, date_scores[-1][1] + score_diff)
        else:
            date_scores.append((date_string, date_scores[-1][1] + score_diff))
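
If you then want the result back in pandas, the accumulated list of (date, score) pairs converts directly (a sketch, assuming pandas is imported as pd):

import pandas as pd

result = pd.DataFrame(date_scores, columns=['date', 'score']).set_index('date')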
Answered 2013-04-05T08:45:58.140