
I have a dataset of the following form (dropbox download, 23 kB CSV):

The sampling rate of the data varies from second to second, in some cases from 0 Hz to over 200 Hz; the highest sampling rate in the provided dataset is about 50 samples per second.

When samples are taken within a second, assume they are evenly spread across that second. For example,

time                   x
2012-12-06 21:12:40    128.75909883327378
2012-12-06 21:12:40     32.799224301545976
2012-12-06 21:12:40     98.932953779777989
2012-12-06 21:12:43    132.07033814856786
2012-12-06 21:12:43    132.07033814856786
2012-12-06 21:12:43     65.71691352191452
2012-12-06 21:12:44    117.1350194748169
2012-12-06 21:12:45     13.095622561808861
2012-12-06 21:12:47     61.295242676059246
2012-12-06 21:12:48     94.774064119961352
2012-12-06 21:12:49     80.169378222553533
2012-12-06 21:12:49     80.291142695702533
2012-12-06 21:12:49    136.55650749231367
2012-12-06 21:12:49    127.29790925838365

should become

time                        x
2012-12-06 21:12:40 000ms   128.75909883327378
2012-12-06 21:12:40 333ms    32.799224301545976
2012-12-06 21:12:40 666ms    98.932953779777989
2012-12-06 21:12:43 000ms   132.07033814856786
2012-12-06 21:12:43 333ms   132.07033814856786
2012-12-06 21:12:43 666ms    65.71691352191452
2012-12-06 21:12:44 000ms   117.1350194748169
2012-12-06 21:12:45 000ms    13.095622561808861
2012-12-06 21:12:47 000ms    61.295242676059246
2012-12-06 21:12:48 000ms    94.774064119961352
2012-12-06 21:12:49 000ms    80.169378222553533
2012-12-06 21:12:49 250ms    80.291142695702533
2012-12-06 21:12:49 500ms   136.55650749231367
2012-12-06 21:12:49 750ms   127.29790925838365

Is there an easy way to do this using the pandas time-series resampling functionality, or is there something in numpy or scipy that would work?


2 Answers


I don't think there is a built-in pandas or numpy method/function that can do this.

Instead, I would use a Python generator:

def repeats(lst):
    i_0 = None
    n = -1 # will still work if lst starts with None
    for i in lst:
        if i == i_0:
            n += 1
        else:
            n = 0
        yield n
        i_0 = i
# list(repeats([1,1,1,2,2,3])) == [0,1,2,0,1,0]

You can then load this generator into a numpy array:

import numpy as np
df['rep'] = np.array(list(repeats(df['time'])))

Count the number of repeats:

from collections import Counter
count = Counter(df['time'])
df['count'] = df['time'].apply(lambda x: count[x])

And do the calculation (this is the most expensive part):

import datetime

df['time2'] = df.apply(lambda row: (row['time']
                                    + datetime.timedelta(0, 1)  # 1 s
                                    * row['rep']
                                    / row['count']),
                       axis=1)

Note: to remove the intermediate columns, use `del df['rep']` and `del df['count']`.

A "built-in" way to do it might be to use shift twice, but I think that would get a little messy...
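For reference, the same idea can be vectorised without the Python-level `apply`, using `groupby.cumcount` for the `rep` column and `transform('size')` for the `count` column. This is a sketch, not part of the original answer; the sample data is made up:

```python
import pandas as pd

df = pd.DataFrame({
    "time": pd.to_datetime([
        "2012-12-06 21:12:40", "2012-12-06 21:12:40",
        "2012-12-06 21:12:40", "2012-12-06 21:12:44",
    ]),
    "x": [128.759, 32.799, 98.933, 117.135],
})

# rank of each row within its second, and the size of its group
rep = df.groupby("time").cumcount()                    # 0, 1, 2, 0
count = df.groupby("time")["time"].transform("size")   # 3, 3, 3, 1

# spread each group evenly across its second
df["time2"] = df["time"] + pd.to_timedelta(rep / count, unit="s")
```

This avoids building the `rep`/`count` columns by hand and keeps all the arithmetic inside pandas.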

Answered 2012-12-09T02:15:26.530

This seemed like a good use case for pandas' groupby machinery, so I wanted to offer a solution for it as well. I find it somewhat more readable than Andy's solution, though it isn't actually that much shorter.

# First, get your data into a dataframe after having copied 
# it with the mouse into a multi-line string:

import pandas as pd
from io import StringIO  # Python 2: from StringIO import StringIO

s = """2012-12-06 21:12:40    128.75909883327378
2012-12-06 21:12:40     32.799224301545976
2012-12-06 21:12:40     98.932953779777989
2012-12-06 21:12:43    132.07033814856786
2012-12-06 21:12:43    132.07033814856786
2012-12-06 21:12:43     65.71691352191452
2012-12-06 21:12:44    117.1350194748169
2012-12-06 21:12:45     13.095622561808861
2012-12-06 21:12:47     61.295242676059246
2012-12-06 21:12:48     94.774064119961352
2012-12-06 21:12:49     80.169378222553533
2012-12-06 21:12:49     80.291142695702533
2012-12-06 21:12:49    136.55650749231367
2012-12-06 21:12:49    127.29790925838365"""

sio = StringIO(s)
df = pd.read_csv(sio, parse_dates=[[0, 1]], sep=r'\s+', header=None)
df = df.set_index('0_1')
df.index.name = 'time'
df.columns = ['x']
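Side note: `parse_dates=[[0, 1]]` (merging columns while parsing) is deprecated in recent pandas versions. On newer pandas the date and time columns can be combined manually instead; a sketch with a shortened sample, where the column names are my own:

```python
import pandas as pd
from io import StringIO

# a short sample in the same two-column layout as s above
s = """2012-12-06 21:12:40    128.759
2012-12-06 21:12:40     32.799
2012-12-06 21:12:43    132.070"""

raw = pd.read_csv(StringIO(s), sep=r"\s+", header=None,
                  names=["date", "clock", "x"])
# glue the date and clock columns back together into one timestamp index
raw.index = pd.to_datetime(raw["date"] + " " + raw["clock"])
raw.index.name = "time"
df = raw[["x"]]
```

The result is the same `time`-indexed frame with a single `x` column.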

Everything up to this point is just data preparation, so if you want to compare solution lengths, start from here! ;)

# Now, groupby the same time indices:

grouped = df.groupby(df.index)

# Create yourself a second object
from datetime import timedelta
second = timedelta(seconds=1)

# loop over group elements, catch new index parts in list
l = []
for _,group in grouped:
    size = len(group)
    if size == 1:
        # go to pydatetime for later addition, so that list is all in 1 format
        l.append(group.index.to_pydatetime())
    else:
        offsets = [i * second / size for i in range(size)]
        l.append(group.index.to_pydatetime() + offsets)

# exchange index for new index
import numpy as np
df.index = pd.DatetimeIndex(np.concatenate(l))
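As a self-contained sanity check of the loop above (a sketch with a tiny made-up dataset; the single-element branch can be folded into the general case, since the offset is then just zero):

```python
import numpy as np
import pandas as pd
from datetime import timedelta

# illustrative data: three samples in one second, one in another
df = pd.DataFrame(
    {"x": [128.759, 32.799, 98.933, 117.135]},
    index=pd.to_datetime([
        "2012-12-06 21:12:40", "2012-12-06 21:12:40",
        "2012-12-06 21:12:40", "2012-12-06 21:12:44",
    ]),
)

second = timedelta(seconds=1)
parts = []
for _, group in df.groupby(df.index):
    size = len(group)
    # offsets 0/size, 1/size, ... of a second within each group
    offsets = [i * second / size for i in range(size)]
    parts.append(group.index.to_pydatetime() + np.array(offsets, dtype=object))

df.index = pd.DatetimeIndex(np.concatenate(parts))

# every timestamp is now unique and strictly increasing
assert df.index.is_unique and df.index.is_monotonic_increasing
```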
Answered 2013-02-28T01:24:51.853