
I am trying to implement the softmax function for a neural network written in Numpy. Let h be the softmax value of a given signal i.

The softmax function:

h_i = e^(x_i) / Σ_k e^(x_k)

I have been struggling to implement the partial derivative of the softmax activation function.

The softmax partial derivative:

∂h_i/∂x_j = h_i * (1 - h_i)    if i = j
∂h_i/∂x_j = -h_i * h_j         if i ≠ j

The problem I am currently running into is that, as the training progresses, all the partial derivatives approach 0. I have cross-referenced my math with this excellent answer, but my math does not seem to work out.

import numpy as np
def softmax_function( signal, derivative=False ):
    # Calculate activation signal
    e_x = np.exp( signal )
    signal = e_x / np.sum( e_x, axis = 1, keepdims = True )

    if derivative:
        # Return the partial derivation of the activation function
        return np.multiply( signal, 1 - signal ) + sum(
            # handle the off-diagonal values
            - signal * np.roll( signal, i, axis = 1 )
        for i in range(1, signal.shape[1] )
        )
    else:
        # Return the activation signal
        return signal
#end activation function

The signal parameter contains the input signal sent into the activation function and has the shape (n_samples, n_features):

# sample signal (3 samples, 3 features)
signal = [[0.3394572666491664, 0.3089068053925853, 0.3516359279582483],
          [0.33932706934615525, 0.3094755563319447, 0.3511973743219001],
          [0.3394407172182317, 0.30889042266755573, 0.35166886011421256]]
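As a quick sanity check (my addition, using the softmax_function and signal defined above), the forward pass should return rows that each sum to 1, since softmax normalizes along axis=1:

import numpy as np

out = softmax_function(np.array(signal))
print(out.shape)        # (3, 3)
print(out.sum(axis=1))  # [1. 1. 1.] -- every row sums to one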

The following code snippet is a fully working activation function, included only as a reference and as proof (mostly for myself) that the conceptual idea actually works:

from scipy.special import expit
import numpy as np
def sigmoid_function( signal, derivative=False ):
    # Prevent overflow.
    signal = np.clip( signal, -500, 500 )

    # Calculate activation signal
    signal = expit( signal )

    if derivative:
        # Return the partial derivation of the activation function
        return np.multiply(signal, 1 - signal)
    else:
        # Return the activation signal
        return signal
#end activation function
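For reference, a quick check of this helper on some made-up inputs (the example values are mine, not from the question):

>>> x = np.array([[0.5, -1.0, 2.0]])
>>> sigmoid_function(x)
array([[0.62245933, 0.26894142, 0.88079708]])
>>> sigmoid_function(x, derivative=True)
array([[0.23500371, 0.19661193, 0.10499359]])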

Edit

  • The problem intuitively persists in a simple single-layer network; softmax (and its derivative) is applied only at the final layer.

1 Answer


This is an answer on how to calculate the derivative of the softmax function in a more vectorized numpy fashion. However, the fact that the partial derivatives approach zero might not be a math problem at all: it could just be a learning-rate issue or the dying-weights problem known from complex deep neural networks. Layers such as ReLU help prevent the latter.


First, I used the following signal (just duplicating your last entry), 4 samples x 3 features, so it is easier to see what happens to the dimensions.

>>> import numpy as np
>>> signal = np.array([[0.3394572666491664, 0.3089068053925853, 0.3516359279582483],
...                    [0.33932706934615525, 0.3094755563319447, 0.3511973743219001],
...                    [0.3394407172182317, 0.30889042266755573, 0.35166886011421256],
...                    [0.3394407172182317, 0.30889042266755573, 0.35166886011421256]])
>>> signal.shape
(4, 3)

Next, you want to calculate the Jacobian matrix of the softmax function. According to the cited page, it is defined as -hi * hj for the off-diagonal entries (the majority of the matrix when n_features > 2), so let's start there. In numpy, you can efficiently calculate that Jacobian using broadcasting:

>>> J = - signal[..., None] * signal[:, None, :]
>>> J.shape
(4, 3, 3)

The first signal[..., None] (equivalent to signal[:, :, None]) reshapes the signal to (4, 3, 1), while the second signal[:, None, :] reshapes it to (4, 1, 3). Then, the * just multiplies both matrices element-wise. Numpy's internal broadcasting repeats both matrices to form an n_features x n_features matrix for every sample.
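To make the broadcasting concrete, here is a small check (my addition, not part of the original answer) comparing the broadcast product against an explicit double loop for a single sample:

>>> s = signal[:1]                            # one sample, shape (1, 3)
>>> J_fast = - s[..., None] * s[:, None, :]   # broadcast: (1, 3, 1) * (1, 1, 3) -> (1, 3, 3)
>>> J_slow = np.empty((1, 3, 3))
>>> for i in range(3):
...     for j in range(3):
...         J_slow[0, i, j] = - s[0, i] * s[0, j]
>>> np.allclose(J_fast, J_slow)
True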

Then, we need to fix the diagonal elements:

>>> iy, ix = np.diag_indices_from(J[0])
>>> J[:, iy, ix] = signal * (1. - signal)

The lines above extract the diagonal indices of an n_features x n_features matrix. It is equivalent to doing iy = np.arange(n_features); ix = np.arange(n_features). Then, the diagonal entries are replaced by your definition, hi * (1 - hi).
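For illustration (my addition), this is what those indices look like for a 3 x 3 matrix:

>>> np.diag_indices_from(np.zeros((3, 3)))
(array([0, 1, 2]), array([0, 1, 2]))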

Last, according to the linked source, you need to sum across the rows for each sample. This can be done as:

>>> J = J.sum(axis=1)
>>> J.shape
(4, 3)
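As a side note (my addition, not part of the original answer): summing across the rows is the same as a vector-Jacobian product with an all-ones upstream gradient, i.e. what backpropagation would produce if dL/dh were all ones:

>>> J3 = - signal[..., None] * signal[:, None, :]   # rebuild the full (4, 3, 3) Jacobian
>>> J3[:, iy, ix] = signal * (1. - signal)
>>> grad = np.ones(signal.shape[1])                 # pretend an all-ones upstream gradient dL/dh
>>> np.allclose(np.einsum('sij,i->sj', J3, grad), J)
True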

Find a summarized version below:

if derivative:
    J = - signal[..., None] * signal[:, None, :] # off-diagonal Jacobian
    iy, ix = np.diag_indices_from(J[0])
    J[:, iy, ix] = signal * (1. - signal) # diagonal
    return J.sum(axis=1) # sum across-rows for each sample
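Putting it all together, here is a sketch (my assembly of the pieces above, not verbatim from the answer) of the asker's softmax_function with the vectorized Jacobian folded in:

import numpy as np

def softmax_function(signal, derivative=False):
    # Calculate activation signal
    e_x = np.exp(signal)
    signal = e_x / np.sum(e_x, axis=1, keepdims=True)

    if derivative:
        # Off-diagonal Jacobian entries: -h_i * h_j
        J = - signal[..., None] * signal[:, None, :]
        iy, ix = np.diag_indices_from(J[0])
        # Diagonal Jacobian entries: h_i * (1 - h_i)
        J[:, iy, ix] = signal * (1. - signal)
        # Sum across rows for each sample
        return J.sum(axis=1)
    else:
        # Return the activation signal
        return signal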

Comparison of the derivatives:

>>> signal = [[0.3394572666491664, 0.3089068053925853, 0.3516359279582483],
...           [0.33932706934615525, 0.3094755563319447, 0.3511973743219001],
...           [0.3394407172182317, 0.30889042266755573, 0.35166886011421256],
...           [0.3394407172182317, 0.30889042266755573, 0.35166886011421256]]
>>> e_x = np.exp( signal )
>>> signal = e_x / np.sum( e_x, axis = 1, keepdims = True )

Yours:

>>> np.multiply( signal, 1 - signal ) + sum(
        # handle the off-diagonal values
        - signal * np.roll( signal, i, axis = 1 )
        for i in range(1, signal.shape[1] )
    )
array([[  2.77555756e-17,  -2.77555756e-17,   0.00000000e+00],
       [ -2.77555756e-17,  -2.77555756e-17,  -2.77555756e-17],
       [  2.77555756e-17,   0.00000000e+00,   2.77555756e-17],
       [  2.77555756e-17,   0.00000000e+00,   2.77555756e-17]])

Mine:

>>> J = - signal[..., None] * signal[:, None, :]
>>> iy, ix = np.diag_indices_from(J[0])
>>> J[:, iy, ix] = signal * (1. - signal)
>>> J.sum(axis=1)
array([[  4.16333634e-17,  -1.38777878e-17,   0.00000000e+00],
       [ -2.77555756e-17,  -2.77555756e-17,  -2.77555756e-17],
       [  2.77555756e-17,   1.38777878e-17,   2.77555756e-17],
       [  2.77555756e-17,   1.38777878e-17,   2.77555756e-17]])
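Both versions agree up to floating-point noise; analytically the result is exactly zero here, since each column of a softmax Jacobian sums to 0 (the outputs always sum to 1). To verify the Jacobian itself before the row sum, a quick finite-difference check can be used; this sketch is my addition, not part of the original answer:

>>> def softmax(x):
...     e_x = np.exp(x)
...     return e_x / e_x.sum(axis=1, keepdims=True)
...
>>> x = np.array([[0.3394572666491664, 0.3089068053925853, 0.3516359279582483]])
>>> h = softmax(x)
>>> J = - h[..., None] * h[:, None, :]     # analytic Jacobian, off-diagonal entries
>>> iy, ix = np.diag_indices_from(J[0])
>>> J[:, iy, ix] = h * (1. - h)            # analytic Jacobian, diagonal entries
>>> eps = 1e-6
>>> J_num = np.empty((3, 3))
>>> for j in range(3):
...     x_p, x_m = x.copy(), x.copy()
...     x_p[0, j] += eps
...     x_m[0, j] -= eps
...     J_num[:, j] = (softmax(x_p) - softmax(x_m))[0] / (2 * eps)
...
>>> np.allclose(J[0], J_num, atol=1e-8)
True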
Answered 2016-03-29T09:48:14.783