
I am trying to model an AR(1) process that generates binary observations with some probability.

I have implemented a particle filter (sequential importance resampling), but its behaviour seems strange.


After defining some helper functions:

import numpy as np
import copy as cp

sigmoid2 = lambda x: 1/(1+np.exp(-x))                                        # logistic link
bern_likelihood = lambda x, y: (sigmoid2(x)**y) * ((1-sigmoid2(x))**(1-y))   # Bernoulli pmf
normal = lambda x, mu, sigma: (1/(sigma*np.sqrt(2*np.pi)))*np.exp(-0.5*((x-mu)**2)/(sigma**2))  # Gaussian pdf
newVal = lambda nseT: np.random.normal(loc=0, scale=nseT, size=nParticles)   # proposal increments
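As a quick sanity check (assuming SciPy is available; `scipy.stats` is not used anywhere in the question itself), the hand-rolled densities can be compared against the reference implementations:

```python
import numpy as np
from scipy.stats import norm, bernoulli

sigmoid2 = lambda x: 1/(1+np.exp(-x))
bern_likelihood = lambda x, y: (sigmoid2(x)**y) * ((1-sigmoid2(x))**(1-y))
normal = lambda x, mu, sigma: (1/(sigma*np.sqrt(2*np.pi)))*np.exp(-0.5*((x-mu)**2)/(sigma**2))

x = np.array([-1.0, 0.0, 0.5])
# Gaussian pdf matches scipy.stats.norm.pdf
assert np.allclose(normal(x, 0.0, 0.1), norm.pdf(x, loc=0.0, scale=0.1))
# Bernoulli likelihood matches scipy.stats.bernoulli.pmf
assert np.allclose(bern_likelihood(x, 1), bernoulli.pmf(1, sigmoid2(x)))
```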

I generate the latent variable:

thetaPath = []
x = 0
AR_noise = .1

nTrials = 800
for i in range(nTrials):
    x += np.random.normal(loc=0,scale=AR_noise)
    thetaPath.append(x)

thetaPath = np.array(thetaPath)

which generates the observed binary data through a sigmoid link:

frac_corr = sigmoid2(thetaPath)
trials = np.random.random(size=nTrials)<frac_corr
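For reference, the same generative process can be written in a few vectorised lines (`np.cumsum` of the Gaussian increments gives the random walk; the seeded `Generator` here is just for reproducibility and is not part of the original code):

```python
import numpy as np

rng = np.random.default_rng(0)
nTrials = 800
AR_noise = 0.1

# latent random walk (an AR(1) process with unit coefficient)
thetaPath = np.cumsum(rng.normal(0, AR_noise, nTrials))
frac_corr = 1 / (1 + np.exp(-thetaPath))     # sigmoid link
trials = rng.random(nTrials) < frac_corr     # binary observations

# the empirical hit rate should roughly track the mean link probability
print(abs(trials.mean() - frac_corr.mean()))
```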

Then here is my particle filter code:

stL = 0
nParticles = 500

#AR_noise = 0.1
dummy = np.arange(nParticles)

x0 = np.random.normal(loc=0,scale=AR_noise,size=nParticles)   #initialise a set of particles
w0_unNorm = bern_likelihood(x0,trials[0])    #calculate their weights
w0 = w0_unNorm/np.sum(w0_unNorm)    #normalise their weights
xi = cp.deepcopy(x0)
wi = cp.deepcopy(w0)
particle_pths = np.zeros([nParticles,nTrials])
particle_pths[:,0] = cp.deepcopy(x0)
for i in range(1,nTrials):

    #resample when the effective sample size drops below nParticles/2
    if (1/(np.sum(wi**2)))<(nParticles/2):
        #sample with replacement
        resamp = np.random.choice(dummy,size=nParticles,replace=True,p=wi)

        stL += 1   #simple counter for the number of resampling events
        xi = cp.deepcopy(xi[resamp])
        wi = np.full(nParticles, 1/nParticles)   #reset to uniform weights (as an array, not a list)

    oldxi = xi.copy()   #copy, so the in-place update below does not alias oldxi
    xi += newVal(AR_noise*2)   #propose from a wider Gaussian than the model
    wi *= bern_likelihood(xi,trials[i])*normal(xi,oldxi,AR_noise)/normal(xi,oldxi,AR_noise*2)
    wi /= np.sum(wi)

    particle_pths[:,i] = xi
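A useful diagnostic here is the effective sample size ESS = 1/Σwᵢ², the same quantity used in the resampling test above: when it collapses towards 1, a handful of particles carry almost all the weight, and the weighted mean can become jumpy. A minimal standalone sketch (with made-up weights, not the filter's actual output):

```python
import numpy as np

def effective_sample_size(w):
    """ESS = 1 / sum(w_i^2) for weights w (normalised internally)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w**2)

# uniform weights -> ESS equals the number of particles
assert np.isclose(effective_sample_size(np.ones(500)), 500.0)

# one dominant particle -> ESS collapses towards 1
w = np.full(500, 1e-6)
w[0] = 1.0
print(effective_sample_size(w))  # close to 1
```

Tracking `effective_sample_size(wi)` at every iteration of the filter loop shows directly how quickly the weights degenerate between resampling events.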

Here my proposal is defined as

normal(xi,oldxi,AR_noise*2)

while the actual model is

normal(xi,oldxi,AR_noise)
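For what it's worth, the model-over-proposal ratio used in the weight update has a simple closed form when both densities are Gaussians centred on `oldxi` and the proposal is twice as wide: p/q = 2·exp(−3·dx²/(8σ²)), where dx is the proposed step. A small check of that identity (the names here are illustrative, not from the code above):

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

sigma = 0.1   # AR_noise
dx = 0.05     # proposed step x_i - x_{i-1}

# importance ratio p(x|x_old) / q(x|x_old), with q twice as wide as p
ratio = gauss(dx, 0.0, sigma) / gauss(dx, 0.0, 2 * sigma)

# closed form: 2 * exp(-3 * dx^2 / (8 * sigma^2))
closed = 2 * np.exp(-3 * dx**2 / (8 * sigma**2))
assert np.isclose(ratio, closed)
```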

What I see, with both large and small numbers of particles, is that the estimate of the underlying state is discrete and jumpy. Weighted-mean estimates for both cases: [plot: weighted mean of the particle paths, showing jumpy estimates]

When I do not resample, I do not see this: [plot: weighted-mean estimate without resampling]


What could be causing this behaviour? Is it expected, or is there an error in the code?

(After installing the relevant modules, the code provided should run out of the box.)

