
In my NN architecture, I would like to use a different learning rate or a different optimizer (e.g. AdaGrad) in each layer. How can I implement this? Thanks in advance for your help.


1 Answer


Once the optimizer is set up with the model, each parameter of every link in the model has an update_rule attribute (an AdaGradRule in this case), which defines how that parameter is updated.

Each update_rule has its own hyperparam attribute, so you can override the hyperparam for each parameter of a link.

Here is example code:

import chainer
import chainer.functions as F
import chainer.links as L


class MLP(chainer.Chain):

    def __init__(self, n_units, n_out):
        super(MLP, self).__init__()
        with self.init_scope():
            # input size of each layer will be inferred when omitted
            self.l1 = L.Linear(n_units)  # n_in -> n_units
            self.l2 = L.Linear(n_units)  # n_units -> n_units
            self.l3 = L.Linear(n_out)  # n_units -> n_out

    def __call__(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2)

model = MLP(args.unit, 10)
classifier_model = L.Classifier(model)
if args.gpu >= 0:
    chainer.cuda.get_device_from_id(args.gpu).use()  # Make a specified GPU current
    classifier_model.to_gpu()  # Copy the model to the GPU

# Setup an optimizer
optimizer = chainer.optimizers.AdaGrad()
optimizer.setup(classifier_model)

# --- After `optimizer.setup()`, you can modify `hyperparam` of each parameter ---
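
# (Sketch added for illustration, not part of the original answer.)
# Inspect which rule each parameter received; every one should be an
# AdaGradRule reporting AdaGrad's default lr.
for name, param in classifier_model.namedparams():
    print(name, type(param.update_rule).__name__, param.update_rule.hyperparam.lr)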

# 1. Change the hyperparam of the `update_rule` for a specific parameter
#    `l1` is a `Linear` link, which has parameters `W` and `b`
classifier_model.predictor.l1.W.update_rule.hyperparam.lr = 0.01

# 2. Change the hyperparam for all parameters (W & b) of one link
for param in classifier_model.predictor.l2.params():
    param.update_rule.hyperparam.lr = 0.01

# --- You can set up the trainer modules to train the model as follows...
...
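
Beyond per-parameter learning rates, the same update_rule objects let you set every layer at once or freeze a layer entirely. The following is a minimal sketch built on the code above; the enabled flag is part of Chainer's UpdateRule, but the per-layer values here are arbitrary assumptions for illustration:

# 3. Give every layer its own learning rate in one pass
#    (the factors below are hypothetical; pick your own)
layer_lr = {'l1': 0.001, 'l2': 0.005, 'l3': 0.01}
for name, link in [('l1', classifier_model.predictor.l1),
                   ('l2', classifier_model.predictor.l2),
                   ('l3', classifier_model.predictor.l3)]:
    for param in link.params():
        param.update_rule.hyperparam.lr = layer_lr[name]

# 4. Freeze one layer completely by disabling its update rule
for param in classifier_model.predictor.l1.params():
    param.update_rule.enabled = False
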
answered 2017-08-14T00:33:29.153