
I'm currently working through Neural Networks and Deep Learning, but I've run into a problem: updating the code the author provides so that it uses L1 regularization instead of L2 regularization.

The original code, which uses L2 regularization, is:

def update_mini_batch(self, mini_batch, eta, lmbda, n):
    """Update the network's weights and biases by applying gradient
    descent using backpropagation to a single mini batch.  The
    ``mini_batch`` is a list of tuples ``(x, y)``, ``eta`` is the
    learning rate, ``lmbda`` is the regularization parameter, and
    ``n`` is the total size of the training data set.

    """
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw
                    for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b-(eta/len(mini_batch))*nb
                   for b, nb in zip(self.biases, nabla_b)]

As you can see, self.weights is updated with the L2 regularization term included. For L1 regularization, I believe I only need to change that same line to reflect the L1 weight-update rule, which the book gives as:

w -> w - eta*(lmbda/n)*sgn(w) - eta*(dC0/dw)

The book says we can estimate the dC0/dw term using an average over the mini-batch. That statement confused me, but I took it to mean that each mini-batch should use the average of nabla_w for each layer. That led me to the following edit of the code:

def update_mini_batch(self, mini_batch, eta, lmbda, n):
    """Update the network's weights and biases by applying gradient
    descent using backpropagation to a single mini batch.  The
    ``mini_batch`` is a list of tuples ``(x, y)``, ``eta`` is the
    learning rate, ``lmbda`` is the regularization parameter, and
    ``n`` is the total size of the training data set.

    """
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    avg_nw = [np.array([[np.average(layer)] * len(layer[0])] * len(layer))
              for layer in nabla_w]
    self.weights = [(1-eta*(lmbda/n))*w-(eta)*nw
                    for w, nw in zip(self.weights, avg_nw)]
    self.biases = [b-(eta/len(mini_batch))*nb
                   for b, nb in zip(self.biases, nabla_b)]

But the results I get are essentially noise, with accuracy around 10%. Am I misinterpreting the statement, or is my code wrong? Any hints would be appreciated.


1 Answer


This is not correct.

Conceptually, L2 regularization says that after each training iteration we shrink W geometrically by some decay factor. That way, if a weight in W becomes very large, it shrinks by a larger amount. This keeps individual values in W from growing too large.

Conceptually, L1 regularization says that after each training iteration we reduce W linearly toward zero by some constant, without crossing zero: positive values decrease toward zero but not below it, negative values increase toward zero but not above it. This zeroes out the very small values in W, leaving only the values that make a significant contribution.
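To make the contrast concrete, here is a small sketch in plain NumPy (not the book's code; the example weights and the 0.1 decay value are arbitrary choices) showing how each rule moves a few weights:

import numpy as np

w = np.array([2.0, 0.05, -0.05, -2.0])
decay = 0.1  # stands in for eta*(lmbda/n)

# L2: geometric shrink -- every weight is scaled by the same factor,
# so large weights lose the most in absolute terms but never reach zero exactly.
l2_shrunk = (1 - decay) * w
print(l2_shrunk)   # [ 1.8    0.045 -0.045 -1.8  ]

# L1: subtract a constant, moving each weight toward zero and stopping there,
# so the small weights are driven exactly to zero.
l1_shrunk = np.sign(w) * np.maximum(np.abs(w) - decay, 0)
print(l1_shrunk)   # [ 1.9  0.  -0.  -1.9]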

Your second equation

self.weights = [(1-eta*(lmbda/n))*w-(eta)*nw
                for w, nw in zip(self.weights, avg_nw)]

does not implement that straight subtraction; it still has a multiplicative (geometric) scaling in (1-eta*(lmbda/n))*w.

Implement a function reduceLinearlyToZero that takes w and eta*(lmbda/n) and returns max( abs(w) - eta*(lmbda/n), 0 ) * ( 1.0 if w >= 0 else -1.0 ):

def reduceLinearlyToZero(w, eta, lmbda, n):
    # Move each weight eta*(lmbda/n) toward zero, clamping at zero so it never changes sign
    return np.maximum(np.abs(w) - eta * (lmbda / n), 0) * np.sign(w)


self.weights = [ reduceLinearlyToZero(w,eta,lmbda,n)-(eta/len(mini_batch))*nw
                for w, nw in zip(self.weights, avg_nw)]

Or possibly:

self.weights = [ reduceLinearlyToZero(w-(eta/len(mini_batch))*nw,eta,lmbda,n)
                for w, nw in zip(self.weights, avg_nw)]
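
For example, applied to a small test array (the values here are arbitrary; this assumes numpy is imported as np and reduceLinearlyToZero is defined as above), the shrinkage looks like this:

w = np.array([[0.30, -0.02],
              [0.01, -0.50]])
# eta*(lmbda/n) = 0.5 * 0.1 / 1.0 = 0.05, so each entry moves 0.05 toward zero and stops there
print(reduceLinearlyToZero(w, 0.5, 0.1, 1.0))
# [[ 0.25 -0.  ]
#  [ 0.   -0.45]]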