I have defined a "clipping" transform like this:
from pymc3.distributions.transforms import ElemwiseTransform
import aesara.tensor as at
import numpy as np


class MvClippingTransform(ElemwiseTransform):
    name = "MvClippingTransform"

    def __init__(self, lower=None, upper=None):
        if lower is None:
            lower = float("-inf")
        if upper is None:
            upper = float("inf")
        self.lower = lower
        self.upper = upper

    def backward(self, x):
        return x

    def forward(self, x):
        return at.clip(x, self.lower, self.upper)

    def forward_val(self, x, point=None):
        return np.clip(x, self.lower, self.upper)

    def jacobian_det(self, x):
        # The backward transformation as I've defined it is the identity
        # function (perhaps that will change). The Jacobian determinant of
        # the identity is 1, so log(abs(1)) -> 0.
        return at.zeros(x.shape)
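As a quick sanity check, the numpy side of the transform does clip as intended (the bound and input values below are just illustrative, not from my model):

import numpy as np

# Sketch: verify forward_val clips on plain numpy input.
t = MvClippingTransform(lower=None, upper=np.array([1.0, 1.0, 1.0]))
x = np.array([[0.5, 2.0, -3.0]])
print(t.forward_val(x))  # -> [[ 0.5  1.  -3. ]]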
I have applied it to an MvNormal with an LKJ Cholesky prior, as follows:
import importlib
import pymc3 as pm
import clipping

importlib.reload(clipping)

with pm.Model() as m:
    # Taken from https://docs.pymc.io/pymc-examples/examples/case_studies/LKJ.html
    chol, corr, stds = pm.LKJCholeskyCov(
        # compute_corr=True also unpacks the Cholesky matrix in the returns
        # (otherwise we'd have to unpack it ourselves).
        "chol", n=3, eta=2.0, sd_dist=pm.Exponential.dist(1.0), compute_corr=True
    )
    cov = pm.Deterministic("cov", chol.dot(chol.T))
    μ = pm.Uniform("μ", -10, 10, shape=3, testval=samples.mean(axis=0))

    clip_transform = clipping.MvClippingTransform(lower=None, upper=upper_truncation)
    mv = pm.MvNormal(
        "mv", mu=μ, chol=chol, shape=3, transform=clip_transform, observed=samples
    )

    trace = pm.sample(
        random_seed=44, init="adapt_diag", return_inferencedata=True, target_accept=0.9
    )
    ppc = pm.sample_posterior_predictive(trace, var_names=["mv"], random_seed=42)
(upper_truncation is a numpy array.)

Now, I generate the simulated data by defining a covariance matrix for a multivariate normal and applying clipping to its draws, which gives the following:
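The generation step looks roughly like the sketch below (the mean, covariance, and bound values here are placeholders, not the actual ones):

import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only; the real mean/covariance/bounds differ.
mu_true = np.zeros(3)
cov_true = np.array([[1.0, 0.5, 0.2],
                     [0.5, 1.0, 0.3],
                     [0.2, 0.3, 1.0]])
upper_truncation = np.array([1.0, 1.0, 1.0])

raw = rng.multivariate_normal(mu_true, cov_true, size=1000)
samples = np.clip(raw, None, upper_truncation)  # clip each dimension at its bound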
But when I sample from the PPC, I get this:

Even if I set the clipping bound to [0, 0, 0], it still has no effect.
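The mismatch is easy to confirm numerically (assuming pymc3's dict-style PPC output, where ppc["mv"] has shape (n_draws, n_obs, 3)):

import numpy as np

# Per-dimension maximum across all posterior-predictive draws.
# If clipping were applied, these could not exceed upper_truncation.
print(np.max(ppc["mv"], axis=(0, 1)))
print(upper_truncation)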
Why doesn't the PPC (or the parameter sampling, for that matter) reflect the clipping transform?