`data` is a matrix containing 2500 time series. I need to average each time series over time, discarding the data points recorded around spikes (in the interval `tspike - 10*dt ... tspike + 10*dt`). The number of spike times varies per neuron and is stored in a dictionary with 2500 entries. My current code iterates over the neurons and spike times, sets the masked values to NaN, and then calls `bottleneck.nanmean()`. However, this code is slow in its current form, and I'm wondering whether there is a faster solution. Thanks!
import bottleneck
import numpy as np
from numpy.random import rand, randint
t = 1
dt = 1e-4
N = 2500
dtbin = 10*dt
data = np.float32(np.ones((N, int(t/dt))))  # was ones(...) with a float shape
times = np.arange(0,t,dt)
spiketimes = dict.fromkeys(np.arange(N))
for key in spiketimes:
    spiketimes[key] = rand(randint(100))
means = np.empty(N)
for i in range(N):
    spike_times = spiketimes[i]
    datarow = data[i]
    if len(spike_times) > 0:
        for spike_time in spike_times:
            # mask out the window of width 2*dtbin around each spike
            start = max(spike_time - dtbin, 0)
            end = min(spike_time + dtbin, t)
            idx = np.all([times >= start, times <= end], 0)
            datarow[idx] = np.nan
    means[i] = bottleneck.nanmean(datarow)
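One direction I have been experimenting with is to mask by sample index instead of comparing the whole `times` array against `start`/`end` for every spike: since the grid is uniform, each spike maps to an index `round(spike_time/dt)`, and the ±10-sample window can be built with broadcasting. This is only a minimal sketch, not a drop-in replacement: `N` is kept small for illustration, `half` names the window half-width, the spike times are assumed to lie in `[0, t)`, and the mean is taken with plain NumPy (boolean indexing) instead of NaN + bottleneck:

```python
import numpy as np

rng = np.random.default_rng(0)
t, dt = 1.0, 1e-4
N = 5                               # small N for the demo; the real problem has 2500
n = int(round(t / dt))
data = rng.random((N, n)).astype(np.float32)
# hypothetical spike-time dict: 1..99 spikes per neuron, uniform in [0, t)
spiketimes = {i: rng.random(rng.integers(1, 100)) * t for i in range(N)}

half = 10                           # dtbin = 10*dt corresponds to 10 samples per side
offsets = np.arange(-half, half + 1)
means = np.empty(N)
for i in range(N):
    centers = np.round(spiketimes[i] / dt).astype(int)
    # every sample index covered by any spike window, clipped to the array bounds
    idx = (centers[:, None] + offsets).ravel()
    idx = idx[(idx >= 0) & (idx < n)]
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    means[i] = data[i, ~mask].mean()   # average over the unmasked samples only
```

The per-spike work drops from O(n) comparisons to O(window size) index arithmetic, and no NaNs are written into `data`, so the original array stays intact.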