I have one-dimensional measurement data and I want to know the standard deviation of the state at each point when using a Kalman filter. My program is as follows:
from pykalman import KalmanFilter
import numpy as np
measurements = np.asarray([2, 1, 3, 6, 3, 2, 7, 3, 4, 4, 5, 1, 10, 3, 1, 5])
kf = KalmanFilter(transition_matrices=[1],
                  observation_matrices=[1],
                  initial_state_mean=measurements[0],
                  initial_state_covariance=1,
                  observation_covariance=1,
                  transition_covariance=0.01)
state_means, state_covariances = kf.filter(measurements)
state_std = np.sqrt(state_covariances[:,0])
print(state_std)
This produces the following strange result:
[[ 0.70710678]
[ 0.5811612 ]
[ 0.50795838]
[ 0.4597499 ]
[ 0.42573145]
[ 0.40067908]
[ 0.38170166]
[ 0.36704314]
[ 0.35556214]
[ 0.34647811]
[ 0.33923608]
[ 0.33342945]
[ 0.32875331]
[ 0.32497478]
[ 0.32191347]
[ 0.31942809]]
I expected the variance at the last data point to increase. What am I doing wrong?
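For reference, here is a minimal sketch of what I also tried, assuming the same model parameters as above: pykalman's filter() returns covariances with shape (n_timesteps, n_dim_state, n_dim_state), so indexing with [:, 0, 0] gives a flat array of per-step variances, and smooth() gives the uncertainty conditioned on the whole series for comparison.

from pykalman import KalmanFilter
import numpy as np

measurements = np.asarray([2, 1, 3, 6, 3, 2, 7, 3, 4, 4, 5, 1, 10, 3, 1, 5])

kf = KalmanFilter(transition_matrices=[1],
                  observation_matrices=[1],
                  initial_state_mean=measurements[0],
                  initial_state_covariance=1,
                  observation_covariance=1,
                  transition_covariance=0.01)

# Forward filter: covariances have shape (n_timesteps, 1, 1),
# so take [:, 0, 0] to get one variance per time step.
filtered_means, filtered_covs = kf.filter(measurements)
filtered_std = np.sqrt(filtered_covs[:, 0, 0])

# Smoother: each estimate is conditioned on the entire series,
# so its uncertainty profile differs from the forward filter's.
smoothed_means, smoothed_covs = kf.smooth(measurements)
smoothed_std = np.sqrt(smoothed_covs[:, 0, 0])

print(filtered_std)
print(smoothed_std)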