
Do you know why this network refuses to learn? The idea is that it uses ReLU as the activation function in the earlier layers and sigmoid as the activation function in the last layer. When I used only sigmoid, the network learned fine. To verify the network, I tested it on MNIST.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # derivative of the sigmoid: sigmoid(z) * (1 - sigmoid(z))
    return sigmoid(z)*(1-sigmoid(z))

def RELU(z):
    # element-wise max(0, z) via the boolean mask
    return z*(z>0)

def RELU_Prime(z):
    return (z>0)
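
# Aside (not from the question): RELU_Prime returns a boolean array.
# NumPy promotes True/False to 1/0 when the mask is multiplied by the float
# delta in the backward pass, so it acts like the usual ReLU derivative, e.g.
#   RELU_Prime(np.array([-2., 0., 3.])) * 0.5  ->  array([0. , 0. , 0.5])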

    # x - training input; for MNIST e.g. a (1,784) vector
    # y - training label; for MNIST e.g. a (1,10) vector
    # nabla is the gradient for the current x and y
    def backprop(self, x, y):
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x] # list to store all the activations, layer by layer
        zs = [] # list to store all the z vectors, layer by layer
        index = 0
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation)+b
            zs.append(z)
            if index == len(self.weights)-1:
                # last layer is sigmoid
                activation = sigmoid(z)
            # previous layers are ReLU
            else:
                activation = RELU(z)

            activations.append(activation)
            index += 1
        # backward pass
        # output-layer error: cost derivative times sigmoid'(z), because the
        # last layer's activation is sigmoid
        delta = self.cost_derivative(activations[-1], y) * \
            sigmoid_prime(zs[-1])

        nabla_b[-1] = delta

        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        for l in range(2, self.num_layers):
            z = zs[-l]
            # hidden layers use the ReLU derivative
            sp = RELU_Prime(z)
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
        return (nabla_b, nabla_w)
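
Written out, the backward pass above is the standard backprop recursion, with each layer's derivative matching the activation used in the forward pass (textbook formulas, stated here for reference rather than taken from the question):

$$\delta^{L} = \nabla_a C \odot \sigma'(z^{L}), \qquad \delta^{l} = \bigl((W^{l+1})^{\top}\delta^{l+1}\bigr) \odot \mathrm{ReLU}'(z^{l}), \qquad \frac{\partial C}{\partial b^{l}} = \delta^{l}, \qquad \frac{\partial C}{\partial W^{l}} = \delta^{l}\,(a^{l-1})^{\top}$$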

- - - - - - - - EDIT - - - - - - - - - - - - - - -

    def cost_derivative(self, output_activations, y):
        return (output_activations-y)
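
For reference, this is the gradient of the output activations under the quadratic cost, assuming that is the cost the network is trained with:

$$C = \tfrac{1}{2}\lVert a - y\rVert^{2} \;\Rightarrow\; \nabla_a C = a - y$$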

--------------- EDIT 2 -----------------

        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]

where the learning rate η > 0.
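
This update is plain mini-batch stochastic gradient descent; assuming nabla_w and nabla_b hold the gradients summed over the mini-batch (backprop returns them per example, so an assumed caller accumulates them), each parameter steps against the averaged gradient:

$$w \leftarrow w - \frac{\eta}{m}\nabla_w C, \qquad b \leftarrow b - \frac{\eta}{m}\nabla_b C, \qquad m = \texttt{len(mini\_batch)}$$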


1 Answer


For anyone who comes across this in the future: the answer turned out to be simple but well hidden :). The weight initialization was wrong. To make it work, you have to use Xavier initialization multiplied by 2.
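
A minimal sketch of what that could look like, assuming a Nielsen-style network with sizes such as [784, 30, 10] and weights/biases stored as lists of arrays (the helper name he_init is just illustrative, not from the answer). Doubling Xavier's 1/n_in variance gives 2/n_in, which is the He initialization commonly recommended for ReLU layers:

import numpy as np

def he_init(sizes):
    # sizes e.g. [784, 30, 10]; returns (biases, weights) as lists of arrays
    biases = [np.random.randn(y, 1) for y in sizes[1:]]
    # std sqrt(2/n_in): Xavier's 1/n_in variance multiplied by 2 ("He" init),
    # which keeps the activations from collapsing toward zero in ReLU layers
    weights = [np.random.randn(y, x) * np.sqrt(2.0/x)
               for x, y in zip(sizes[:-1], sizes[1:])]
    return biases, weights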

Answered 2020-11-18T01:26:04.993