
For a neural network library I implemented a number of activation functions and loss functions, together with their derivatives. They can be combined arbitrarily, and the derivative at the output layer is simply the product of the loss derivative and the activation derivative.

However, I failed to implement the derivative of the Softmax activation function independently of any loss function. Because of the normalization, i.e. the denominator in the equation, changing a single input activation changes all of the output activations, not just one.
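For example (a quick standalone illustration with its own stable softmax helper), nudging just one logit shifts every output probability:

import numpy as np

def softmax(z):
    exps = np.exp(z - z.max())
    return exps / exps.sum()

z = np.array([1.0, 2.0, 3.0])
z_bumped = z.copy()
z_bumped[0] += 0.1                 # change only the first logit

print(softmax(z))                  # [0.09003057 0.24472847 0.66524096]
print(softmax(z_bumped))           # all three outputs change, not just the first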

Here is my Softmax implementation, whose derivative fails gradient checking by about 1%. How can I implement the Softmax derivative so that it can be combined with any loss function?

import numpy as np


class Softmax:

    def compute(self, incoming):
        exps = np.exp(incoming)
        return exps / exps.sum()

    def delta(self, incoming, outgoing):
        exps = np.exp(incoming)
        others = exps.sum() - exps
        return 1 / (2 + exps / others + others / exps)


activation = Softmax()
cost = SquaredError()

outgoing = activation.compute(incoming)
delta_output_layer = activation.delta(incoming) * cost.delta(outgoing)

5 Answers


Mathematically, the derivative of the softmax output σ(j) with respect to the logit z_i (e.g., W_i * X) is

∂σ(j)/∂z_i = σ(j) · (δ_ij − σ(i))

where δ_ij is the Kronecker delta.

If you implement it iteratively:

def softmax_grad(s):
    # Input s is the softmax value of the original input x; its shape is (n,),
    # e.g. s = np.array([0.3, 0.7]) for x = np.array([0, 1]).

    # make the n x n Jacobian matrix (every entry is filled in below)
    jacobian_m = np.diag(s)

    for i in range(len(jacobian_m)):
        for j in range(len(jacobian_m)):
            if i == j:
                jacobian_m[i][j] = s[i] * (1 - s[i])
            else: 
                jacobian_m[i][j] = -s[i] * s[j]
    return jacobian_m

Test:

In [95]: x
Out[95]: array([1, 2])

In [96]: softmax(x)
Out[96]: array([ 0.26894142,  0.73105858])

In [97]: softmax_grad(softmax(x))
Out[97]: 
array([[ 0.19661193, -0.19661193],
       [-0.19661193,  0.19661193]])

If you implement it in a vectorized version:

soft_max = softmax(x)    

# reshape softmax to 2d so np.dot gives matrix multiplication

def softmax_grad(softmax):
    s = softmax.reshape(-1,1)
    return np.diagflat(s) - np.dot(s, s.T)

softmax_grad(soft_max)

#array([[ 0.19661193, -0.19661193],
#       [-0.19661193,  0.19661193]])
answered 2017-09-03T21:34:55.433

It should be something like this (x is the input to the softmax layer, and dy is the delta coming in from the loss above it):

    # y is the softmax output of this layer, i.e. compute(x)
    dx = y * dy
    s = dx.sum(axis=dx.ndim - 1, keepdims=True)
    dx -= y * s

    return dx

The way you compute it is wrong, though; it should be:

    yact = activation.compute(x)
    ycost = cost.compute(yact)
    dsoftmax = activation.delta(x, cost.delta(yact, ycost, ytrue)) 

Explanation: the delta function is part of the backpropagation algorithm, so its responsibility is to multiply the vector dy (dy in my code, outgoing in yours) by the Jacobian of the compute(x) function evaluated at x. If you work out what this Jacobian looks like for softmax [1] and then multiply it from the left by a vector dy, you will find, after a bit of algebra, that you get something corresponding to my Python code.

[1] https://stats.stackexchange.com/questions/79454/softmax-layer-in-a-neural-network
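
A quick numeric check of that algebra (a sketch with illustrative names, assuming NumPy): multiplying dy from the left by the Jacobian diag(y) - outer(y, y) gives exactly the y * dy - y * sum(y * dy) expression used above.

import numpy as np

x = np.array([1.0, 2.0, 0.5])
dy = np.array([0.1, -0.3, 0.2])

y = np.exp(x - x.max())
y /= y.sum()                               # softmax output, i.e. compute(x)

jacobian = np.diag(y) - np.outer(y, y)     # d y_i / d x_j
via_jacobian = jacobian @ dy               # J is symmetric, so dy from the left gives the same result

dx = y * dy
dx -= y * dx.sum()                         # the shortcut used in the delta code above

print(np.allclose(via_jacobian, dx))       # True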

answered 2015-11-07T08:15:30.933

The other answers are great; here I share a simple forward/backward implementation that works regardless of the loss function.

Here is a brief derivation of the backward pass for softmax. By the chain rule, ∂L/∂x_j = Σ_k (∂L/∂p_k) · ∂p_k/∂x_j, where ∂p_k/∂x_j = p_k (δ_kj − p_j); the loss-dependent factor ∂L/∂p_k is not part of our implementation.

The backward pass is verified with a manual gradient check.

import numpy as np

class Softmax:
    def forward(self, x):
        mx = np.max(x, axis=1, keepdims=True)
        x = x - mx  # log-sum-exp trick
        e = np.exp(x)
        probs = e / np.sum(e, axis=1, keepdims=True)
        return probs

    def backward(self, x, probs, bp_err):
        dim = x.shape[1]
        output = np.empty(x.shape)
        for j in range(dim):
            d_prob_over_xj = - (probs * probs[:,[j]])  # i.e. -prob_k * prob_j, whether k == j or not
            d_prob_over_xj[:,j] += probs[:,j]   # i.e. when k == j, add prob_j
            output[:,j] = np.sum(bp_err * d_prob_over_xj, axis=1)
        return output

def compute_manual_grads(x, pred_fn):
    eps = 1e-3
    batch_size, dim = x.shape

    grads = np.empty(x.shape)
    for i in range(batch_size):
        for j in range(dim):
            x[i,j] += eps
            y1 = pred_fn(x)

            x[i,j] -= 2*eps
            y2 = pred_fn(x)

            grads[i,j] = (y1 - y2) / (2*eps)
            x[i,j] += eps
    return grads

def loss_fn(probs, ys, loss_type):
    batch_size = probs.shape[0]
    # dummy mse
    if loss_type=="mse":
        loss = np.sum((np.take_along_axis(probs, ys.reshape(-1,1), axis=1) - 1)**2) / batch_size
        values = 2 * (np.take_along_axis(probs, ys.reshape(-1,1), axis=1) - 1) / batch_size

    # cross ent
    if loss_type=="xent":
        loss = - np.sum( np.take_along_axis(np.log(probs), ys.reshape(-1,1), axis=1) ) / batch_size
        values = -1 / np.take_along_axis(probs, ys.reshape(-1,1), axis=1) / batch_size

    err = np.zeros(probs.shape)
    np.put_along_axis(err, ys.reshape(-1,1), values, axis=1)
    return loss, err

if __name__ == "__main__":
    batch_size = 10
    dim = 5

    x = np.random.rand(batch_size, dim)
    ys = np.random.randint(0, dim, batch_size)
    for loss_type in ["mse", "xent"]:
        S = Softmax()
        probs = S.forward(x)
        loss, bp_err = loss_fn(probs, ys, loss_type)

        grads = S.backward(x, probs, bp_err)

        def pred_fn(x, ys):
            pred = S.forward(x)
            loss, err = loss_fn(pred, ys, loss_type)
            return loss

        manual_grads = compute_manual_grads(x, lambda x: pred_fn(x, ys))

        # compare both grads
        print(f"loss_type = {loss_type}, grad diff = {np.sum((grads - manual_grads)**2) / batch_size}")
answered 2021-04-17T06:00:51.040

Just in case you are processing in batches, here is an implementation in NumPy (tested against TensorFlow). However, I would suggest avoiding the associated tensor operations by mixing the Jacobian with the cross-entropy, which leads to a very simple and efficient expression (see the sketch after the code below).

import numpy as np
import tensorflow as tf

def softmax(z):
  exps = np.exp(z - np.max(z))
  return exps / np.sum(exps, axis=1, keepdims=True)

def softmax_jacob(s):
  return np.einsum('ij,jk->ijk', s, np.eye(s.shape[-1])) \
       - np.einsum('ij,ik->ijk', s, s)

def np_softmax_test(z):
  return softmax_jacob(softmax(z))

def tf_softmax_test(z):
  z = tf.constant(z, dtype=tf.float32)
  with tf.GradientTape() as g:
    g.watch(z)
    a = tf.nn.softmax(z) 
  jacob = g.batch_jacobian(a, z)
  return jacob.numpy()

z = np.random.randn(3, 5)
np.all(np.isclose(np_softmax_test(z), tf_softmax_test(z)))
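
A minimal sketch of that shortcut (a hypothetical helper, reusing softmax and z from above): for softmax followed by mean cross-entropy over one-hot targets, the gradient with respect to the logits collapses to (probs - one_hot) / batch_size, so the Jacobian never needs to be materialized.

def softmax_xent_grad(z, one_hot_targets):
  # gradient of the mean cross-entropy w.r.t. the logits z,
  # with the softmax Jacobian already folded in
  return (softmax(z) - one_hot_targets) / z.shape[0]

one_hot = np.eye(5)[np.random.randint(0, 5, size=3)]
print(softmax_xent_grad(z, one_hot).shape)  # (3, 5), same shape as z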
answered 2020-06-30T01:32:13.443

Here is a C++ vectorized version, using intrinsics (22 times faster (!) than the non-SSE version):

// How many floats fit into __m256 "group".
// Used by vectors and matrices, to ensure their dimensions are appropriate for 
// intrinsics.
// Otherwise, consecutive rows of matrices will not be 32-byte aligned, and 
// operations on them will be incorrect.
#define F_MULTIPLE_OF_M256 8


//check to quickly see if your rows are divisible by m256.
//you can 'undefine' to save performance, after everything was verified to be correct.
#define ASSERT_THE_M256_MULTIPLES
#ifdef ASSERT_THE_M256_MULTIPLES
    #define assert_is_m256_multiple(x)  assert( (x%F_MULTIPLE_OF_M256) == 0)
#else
    #define assert_is_m256_multiple(x)
#endif


// usually used at the end of our Reduce functions,
// where the final __m256 mSum needs to be collapsed into 1 scalar.
static inline float slow_hAdd_ps(__m256 x){
    const float *sumStart = reinterpret_cast<const float*>(&x);
    float sum = 0.0f;

    for(size_t i=0; i<F_MULTIPLE_OF_M256; ++i){
        sum += sumStart[i];
    }
    return sum;
}



f_vec SoftmaxGrad_fromResult(const float *softmaxResult,  size_t size,  
                             const float *gradFromAbove){//<--gradient vector, flowing into us from the above layer
assert_is_m256_multiple(size);
//allocate vector, where to store output:
f_vec grad_v(size, true);//true: skip filling with zeros, to save performance.

const __m256* end   = (const __m256*)(softmaxResult + size);


for(size_t i=0; i<size; ++i){// <--for every row
    //go through this i'th row:
    __m256 sum =  _mm256_set1_ps(0.0f);

    const __m256 neg_sft_i  =  _mm256_set1_ps( -softmaxResult[i] );
    const __m256 *s  =  (const __m256*)softmaxResult;
    const __m256 *gAbove  =   (__m256*)gradFromAbove;

    for (s;  s<end; ){
        __m256 mul =  _mm256_mul_ps(*s, neg_sft_i);  //  sftmaxResult_j  *  (-sftmaxResult_i)
        mul =  _mm256_mul_ps( mul, *gAbove );

        sum =  _mm256_add_ps( sum,  mul );//adding to the total sum of this row.
        ++s;
        ++gAbove;
    }
    grad_v[i]  =  slow_hAdd_ps( sum );//collapse the sum into 1 scalar (true sum of this row).
}//end for every row

//reset back to start and subtract a vector, to account for Kronecker delta:
__m256 *g =  (__m256*)grad_v._contents;
__m256 *s =  (__m256*)softmaxResult;
__m256 *gAbove =  (__m256*)gradFromAbove;

for(s; s<end; ){
    __m256 mul = _mm256_mul_ps(*s, *gAbove);
    *g = _mm256_add_ps( *g, mul );
    ++s; 
    ++g;
}

return grad_v;

}

If for some reason someone wants a simple (non-SSE) version, here it is:

inline static void SoftmaxGrad_fromResult_nonSSE(const float* softmaxResult,  
                                                 const float *gradFromAbove,  //<--gradient vector, flowing into us from the above layer
                                                 float *gradOutput,  
                                                 size_t count ){
    // every pre-softmax element in a layer contributed to the softmax of every other element
    // (it went into the denominator). So gradient will be distributed from every post-softmax element to every pre-elem.
    for(size_t i=0; i<count; ++i){
        //go through this i'th row:
        float sum =  0.0f;

        const float neg_sft_i  =  -softmaxResult[i];

        for(size_t j=0; j<count; ++j){
            float mul =  gradFromAbove[j] * softmaxResult[j] * neg_sft_i;
            sum +=  mul;//adding to the total sum of this row.
        }
        //NOTICE: equals, overwriting any old values:
        gradOutput[i]  =  sum;
    }//end for every row

    for(size_t i=0; i<count; ++i){
        gradOutput[i] +=  softmaxResult[i] * gradFromAbove[i];
    }
}
answered 2019-07-11T18:33:32.320