
I am using pyro-ppl 3.0 for probabilistic programming. While working through the tutorial on Bayesian regression, I used an AutoGuide together with pyro.random_module to turn an ordinary feed-forward network into a Bayesian network.

import torch
import torch.nn as nn

import pyro
from pyro.distributions import Normal, Uniform
from pyro.contrib.autoguide import AutoDiagonalNormal  # pyro.infer.autoguide in pyro 1.x
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
from torch.distributions import constraints

# linear regression
class RegressionModel(nn.Module):
    def __init__(self, p):
        # p = number of input features
        super(RegressionModel, self).__init__()
        self.linear1 = nn.Linear(p, 2)
        self.linear2 = nn.Linear(2, 1)
        self.softplus = nn.Softplus()

    def forward(self, x):
        x = self.softplus(self.linear1(x))
        return self.linear2(x)

regression_model = RegressionModel(2)  # p=2, matching the (2, 2) weight prior below

# model
def model(x_data, y_data):
    # weight and bias prior
    w1_prior = Normal(torch.zeros(2,2), torch.ones(2,2)).to_event(2)
    b1_prior = Normal(torch.ones(2)*8, torch.ones(2)*1000).to_event(1)
    w2_prior = Normal(torch.zeros(1,2), torch.ones(1,2)).to_event(1)
    b2_prior = Normal(torch.ones(1)*3, torch.ones(1)*500).to_event(1)

    priors = {'linear1.weight': w1_prior, 'linear1.bias': b1_prior,
             'linear2.weight': w2_prior, 'linear2.bias': b2_prior}

    scale = pyro.sample("sigma", Uniform(0., 10.))

    # lift module parameters to random variables sampled from the priors
    lifted_module = pyro.random_module("module", regression_model, priors)
    # sample a nn (which also samples w and b)
    lifted_reg_model = lifted_module()
    with pyro.plate("map", len(x_data)):
        # run the nn forward on data
        prediction_mean = lifted_reg_model(x_data).squeeze(-1)
        # condition on the observed data
        pyro.sample("obs",
                    Normal(prediction_mean, scale),
                    obs=y_data)
        return prediction_mean

guide = AutoDiagonalNormal(model)

#================

#================

# inference
optim = Adam({"lr": 0.03})
svi = SVI(model, guide, optim, loss=Trace_ELBO(), num_samples=1000)

num_iterations = 1000  # assumed value; left undefined in the original snippet

def train():
    pyro.clear_param_store()
    for j in range(num_iterations):
        # calculate the loss and take a gradient step
        loss = svi.step(x_data, y_data)
        if j % 100 == 0:
            print("[iteration %04d] loss: %.4f" % (j + 1, loss / len(x_data)))

train()

for name, value in pyro.get_param_store().items():
    print(name, pyro.param(name))

The output looks like this:

auto_loc tensor([-2.1585, -0.9799, -0.0378, -0.5000, -1.0241,  2.6091, -1.3760,
         1.6920,  0.2553,  4.5768], requires_grad=True)
auto_scale tensor([0.1432, 0.1017, 0.0368, 0.7588, 0.4160, 0.0624, 0.6657,
        0.0431, 0.2972, 0.0901], grad_fn=<...>)
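(For reference, the 10 here is not a free hyperparameter: AutoDiagonalNormal sizes auto_loc and auto_scale to the total number of latent scalars in the model. A quick count over the prior shapes declared in model(), as a sanity-check sketch:

import math

# Latent sites sampled in model(): the four lifted priors plus sigma.
latent_shapes = {
    "linear1.weight": (2, 2),  # w1_prior
    "linear1.bias":   (2,),    # b1_prior
    "linear2.weight": (1, 2),  # w2_prior
    "linear2.bias":   (1,),    # b2_prior
    "sigma":          (),      # scalar noise scale
}
print(sum(math.prod(s) for s in latent_shapes.values()))  # 4 + 2 + 2 + 1 + 1 = 10
)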

The number of latent variables is set to 10 automatically. I want to change that number, so, as described in the tutorial, I added

##
latent_dim = 5
pyro.param("auto_loc", torch.randn(latent_dim))
pyro.param("auto_scale", torch.ones(latent_dim),
           constraint=constraints.positive)

between the #================ markers indicated above.

But the result is still the same; the number does not change. So how do I set up the AutoDiagonalNormal function to change the number of latent variables?


1 Answer


I think you simply need to call pyro.clear_param_store() when switching between training setups. I believe what is happening is that you are training with latent_dim=5, and then when you set latent_dim=10 the old parameters are still in Pyro's global param store. Note that the torch.randn(latent_dim) argument to the pyro.param() statements is only used for initialization; it is ignored if the parameter has already been initialized (and is found in the global param store).
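A minimal sketch of that param-store behavior (names reused from the question; the shapes are illustrative):

import torch
import pyro

pyro.param("auto_loc", torch.randn(10))
print(pyro.param("auto_loc").shape)  # torch.Size([10])

# Registering the same name again: the torch.randn(5) initializer is
# ignored because "auto_loc" already exists in the global param store.
pyro.param("auto_loc", torch.randn(5))
print(pyro.param("auto_loc").shape)  # still torch.Size([10])

# After clearing the store, the new initializer takes effect.
pyro.clear_param_store()
pyro.param("auto_loc", torch.randn(5))
print(pyro.param("auto_loc").shape)  # torch.Size([5])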

answered 2020-10-15T11:30:50.670