
I have a specific performance problem here. I am working with time series of meteorological forecasts, which I compile into a 2-D numpy array such that

  • dim0 = the time at which the forecast series starts
  • dim1 = the forecast horizon, e.g. 0 to 120 hours

Now, I would like dim0 to have hourly spacing, but some sources only produce a forecast every N hours. As an example, say N=3 and the time step in dim1 is M=1 hour. Then I get something like

12:00  11.2  12.2  14.0  15.0  11.3  12.0
13:00  nan   nan   nan   nan   nan   nan
14:00  nan   nan   nan   nan   nan   nan
15:00  14.7  11.5  12.2  13.0  14.3  15.1

But of course there is information for 13:00 and 14:00 as well, since it can be filled in from the 12:00 forecast run. So I would like to end up with something like this:

12:00  11.2  12.2  14.0  15.0  11.3  12.0
13:00  12.2  14.0  15.0  11.3  12.0  nan
14:00  14.0  15.0  11.3  12.0  nan   nan
15:00  14.7  11.5  12.2  13.0  14.3  15.1
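
For reference, here is the small starting array above written out as a plain numpy array (the timestamps just become row positions, and nan is numpy.nan), in case you want something to test against:

import numpy as np
nan = np.nan

dat = np.array([[11.2, 12.2, 14.0, 15.0, 11.3, 12.0],
                [ nan,  nan,  nan,  nan,  nan,  nan],
                [ nan,  nan,  nan,  nan,  nan,  nan],
                [14.7, 11.5, 12.2, 13.0, 14.3, 15.1]])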

Assuming dim0 is on the order of 1e4 and dim1 on the order of 1e2, what is the fastest way to get there? Right now I am doing it row by row, but that is slow:

nRows, nCols = dat.shape
if N >= M:
    assert N % M == 0  # must have whole numbers of steps
    for i in range(1, nRows):
        k = np.array(np.where(np.isnan(dat[i, :])))
        k = k[k < nCols - N]  # do not overstep
        dat[i, k] = dat[i - 1, k + N]

I am sure there must be a more elegant way to do this? Any hints would be greatly appreciated.


4 Answers


Behold, the power of boolean indexing!!!

def shift_nans(arr):
    while True:
        nan_mask = np.isnan(arr)
        # a cell may be written if it is NaN and its upper-right neighbour is not
        write_mask = nan_mask[1:, :-1]
        read_mask = nan_mask[:-1, 1:]
        write_mask &= ~read_mask
        if not np.any(write_mask):
            return arr
        # pull each writable cell from the previous row, one column to the right
        arr[1:, :-1][write_mask] = arr[:-1, 1:][write_mask]

I think the naming is self-explanatory of what is going on. Getting the slicing right is a pain, but it seems to be working:
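
(test_data below is, presumably, the same 6x6 array that tiago defines in his answer further down; for completeness:)

test_data = np.array([[11.2, 12.2, 14.0, 15.0, 11.3, 12.0],
                      [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
                      [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
                      [14.7, 11.5, 12.2, 13.0, 14.3, 15.1],
                      [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
                      [15.7, 16.5, 17.2, 18.0, 14.0, 12.0]])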

In [214]: shift_nans(test_data)
Out[214]: 
array([[ 11.2,  12.2,  14. ,  15. ,  11.3,  12. ],
       [ 12.2,  14. ,  15. ,  11.3,  12. ,   nan],
       [ 14. ,  15. ,  11.3,  12. ,   nan,   nan],
       [ 14.7,  11.5,  12.2,  13. ,  14.3,  15.1],
       [ 11.5,  12.2,  13. ,  14.3,  15.1,   nan],
       [ 15.7,  16.5,  17.2,  18. ,  14. ,  12. ]])

As for timings:

tmp = np.random.uniform(-10, 20, (10000, 100))
nan_idx = np.random.randint(30, 10000 - 1, 10000)
tmp[nan_idx] = np.nan
tmp1 = tmp.copy()

import timeit

t1 = timeit.timeit(stmt='shift_nans(tmp)',
                   setup='from __main__ import tmp, shift_nans',
                   number=1)
t2 = timeit.timeit(stmt='shift_time(tmp1)', # Ophion's code
                   setup='from __main__ import tmp1, shift_time',
                   number=1)

In [242]: t1, t2
Out[242]: (0.12696346416487359, 0.3427293070417363)
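
(Side note, not from the original answer: after these single timing runs both arrays have been filled in place, so one can check directly whether the two approaches agree, counting NaNs as equal:)

# tmp was filled by shift_nans, tmp1 by shift_time (both modified in place)
print(np.array_equal(tmp, tmp1, equal_nan=True))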
answered 2013-07-26T15:30:49.900

Take a slice of your data with a = yourdata[:,1:].

def shift_time(dat):

    # Find the number of required iterations: the largest gap (in rows)
    # between consecutive rows that actually contain data
    check = np.where(~np.isnan(dat[:, 0]))[0]
    maxiters = np.max(np.diff(check)) - 1

    # No sense in iterating more often than there are columns to shift
    cols = dat.shape[1]
    if cols < maxiters:
        maxiters = cols - 1

    for _ in range(maxiters):
        # Find the NaNs (the last column has no source to pull from)
        row_loc, col_loc = np.where(np.isnan(dat[:, :-1]))

        dat[row_loc, col_loc] = dat[row_loc - 1, col_loc + 1]


a = np.array([[11.2, 12.2, 14.0, 15.0, 11.3, 12.0],
              [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
              [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
              [14.7, 11.5, 12.2, 13.0, 14.3, 15.]])

shift_time(a)
print a

[[ 11.2  12.2  14.   15.   11.3  12. ]
 [ 12.2  14.   15.   11.3  12.    nan]
 [ 14.   15.   11.3  12.    nan   nan]
 [ 14.7  11.5  12.2  13.   14.3  15. ]]

To use your data as is, or it could be changed slightly to take it directly, but this seems like a clear way to show it:

shift_time(yourdata[:,1:]) #Updates in place, no need to return anything.

Using tiago's test:

tmp = np.random.uniform(-10, 20, (10000, 100))
nan_idx = np.random.randint(30, 10000 - 1, 10000)
tmp[nan_idx] = np.nan

import time

t = time.time()
shift_time(tmp)
print time.time() - t

0.364198923111 (seconds)

If you are really clever, you should be able to get away with a single np.where.
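
Not literally a single np.where, but as a rough sketch of what a loop-free, single-pass version could look like (shift_time_vec is a made-up name, it relies on the same assumption as the code above, namely that a row is either fully present or fully NaN, and it returns a new array instead of filling in place):

import numpy as np

def shift_time_vec(dat):
    n_rows, n_cols = dat.shape
    # which rows actually contain data (checking column 0, as above)
    has_data = ~np.isnan(dat[:, 0])
    # for every row, the index of the most recent row that has data
    src = np.maximum.accumulate(np.where(has_data, np.arange(n_rows), -1))
    shift = np.arange(n_rows) - src                # columns to shift left by
    rows = np.broadcast_to(src[:, None], dat.shape)
    cols = np.arange(n_cols)[None, :] + shift[:, None]
    valid = (cols < n_cols) & (src[:, None] >= 0)  # stay inside the array
    out = np.full_like(dat, np.nan)
    out[valid] = dat[rows[valid], cols[valid]]
    # keep the values that were already there
    return np.where(np.isnan(dat), out, dat)

The idea is to compute, per row, which earlier row its data should come from and how far it has to shift left, and then do all the copying in one fancy-indexing assignment.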

answered 2013-07-26T14:19:22.433

This seems to do the trick:

import numpy as np

def shift_time(dat):
    NX, NY = dat.shape
    for i in range(NY):
        # locate every NaN and the candidate source one row up, one column right
        x, y = np.where(np.isnan(dat))
        xr = x - 1
        yr = y + 1
        # only copy where that source cell actually exists
        idx = (xr >= 0) & (yr < NY)
        dat[x[idx], y[idx]] = dat[xr[idx], yr[idx]]
    return

Now with some test data:

In [1]: test_data = array([[ 11.2,  12.2,  14. ,  15. ,  11.3,  12. ],
                           [  nan,   nan,   nan,   nan,   nan,   nan],
                           [  nan,   nan,   nan,   nan,   nan,   nan],
                           [ 14.7,  11.5,  12.2,  13. ,  14.3,  15.1],
                           [  nan,   nan,   nan,   nan,   nan,   nan],
                           [ 15.7,  16.5,  17.2,  18. ,  14. ,  12. ]])
In [2]: shift_time(test_data)
In [3]: print test_data
array([[ 11.2,  12.2,  14. ,  15. ,  11.3,  12. ],
       [ 12.2,  14. ,  15. ,  11.3,  12. ,   nan],
       [ 14. ,  15. ,  11.3,  12. ,   nan,   nan],
       [ 14.7,  11.5,  12.2,  13. ,  14.3,  15.1],
       [ 11.5,  12.2,  13. ,  14.3,  15.1,   nan],
       [ 15.7,  16.5,  17.2,  18. ,  14. ,  12. ]])

And testing with a (1e4, 1e2) array:

In [1]: tmp = np.random.uniform(-10, 20, (10000, 100))
In [2]: nan_idx = np.random.randint(30, 10000 - 1, 10000)
In [3]: tmp[nan_idx] = nan
In [4]: time shift_time(tmp)
CPU times: user 1.53 s, sys: 0.06 s, total: 1.59 s
Wall time: 1.59 s
answered 2013-07-26T14:28:26.447

Each iteration of this pad, roll, roll combination basically does what you are after:

import numpy as np
from numpy import nan

# Starting array
A = np.array([[11.2, 12.2, 14.0, 15.0, 11.3, 12.0],
              [nan,  nan,  nan,  nan,  nan,  nan],
              [nan,  nan,  nan,  nan,  nan,  nan],
              [14.7, 11.5, 12.2, 13.0, 14.3, 15.1]])

def pad_nan(v, pad_width, iaxis, kwargs):
    # custom padding function for np.pad: fill the border with NaN
    v[:pad_width[0]]  = nan
    v[-pad_width[1]:] = nan
    return v

def roll_data(A):
    # fill every NaN from the cell one row up and one column to the right,
    # using a NaN border so values never wrap around the edges
    idx = np.isnan(A)
    A[idx] = np.roll(np.roll(np.pad(A, 1, pad_nan), 1, 0), -1, 1)[1:-1, 1:-1][idx]
    return A

print A
print roll_data(A)
print roll_data(A)

The output gives:

[[ 11.2  12.2  14.   15.   11.3  12. ]
 [  nan   nan   nan   nan   nan   nan]
 [  nan   nan   nan   nan   nan   nan]
 [ 14.7  11.5  12.2  13.   14.3  15.1]]

[[ 11.2  12.2  14.   15.   11.3  12. ]
 [ 12.2  14.   15.   11.3  12.    nan]
 [  nan   nan   nan   nan   nan   nan]
 [ 14.7  11.5  12.2  13.   14.3  15.1]]

[[ 11.2  12.2  14.   15.   11.3  12. ]
 [ 12.2  14.   15.   11.3  12.    nan]
 [ 14.   15.   11.3  12.    nan   nan]
 [ 14.7  11.5  12.2  13.   14.3  15.1]]

Everything is pure numpy, so each iteration should be quite fast. However, I am not sure about the cost of creating the padded array and running multiple iterations; if you try it, let me know the result!
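
In case someone wants to try exactly that, here is a rough driver one could wrap around roll_data to run it to completion and time it (fill_all and the timing lines are just a sketch, not part of the answer above):

import time
import numpy as np

def fill_all(A):
    # keep rolling until a pass no longer fills in any NaN
    while True:
        n_nan = np.isnan(A).sum()
        roll_data(A)
        if np.isnan(A).sum() == n_nan:
            return A

t0 = time.time()
fill_all(A)
print(time.time() - t0)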

answered 2013-07-26T14:23:23.237