
I originally built a numpy-only neural network based on an online tutorial, and I have since realised that I should have some kind of bias neuron. However, I have been really struggling to figure out how to implement it in my code, and I would greatly appreciate some guidance.

import numpy as np

class NN():   
    def __init__(self, layers, type):
        """
        layers: a list of layers, eg:
              2 input neurons
              1 hidden layer of 3 neurons
              2 output neurons
              will look like [2,3,2]
        type: initialisation type, "random" or "uniform" distribution
        """

        self.p = 0.1

        self.layers = len(layers) - 1

        self.inputSize = layers[0]
        self.outputSize = layers[self.layers]

        self.layerSizes = layers[:-1] #input layer, hiddens, discard output layer

        self.inputs = np.zeros(self.inputSize, dtype=float)
        self.outputs = np.zeros(self.outputSize, dtype=float)

        self.L = {}

        if type == "random":
            for i in range(1,self.layers+1):
                if i < self.layers:
                    self.L[i] = (np.random.random_sample((self.layerSizes[i-1], self.layerSizes[i])) - 0.5) * 2
                else:
                    self.L[i] = (np.random.random_sample((self.layerSizes[i-1], self.outputSize)) - 0.5) * 2
        elif type == "uniform":            
            for i in range(1,self.layers+1):
                if i < self.layers:
                    self.L[i] = np.random.uniform( -1 , 1 , (self.layerSizes[i-1],self.layerSizes[i]) )
                else:
                    self.L[i] = np.random.uniform( -1 , 1 , (self.layerSizes[i-1],self.outputSize) )

        else:
            print("unknown initialization type")

    def updateS(self): #forward propagation Sigmoid
        for i in range(1,self.layers+1):
            if 1 == self.layers:  #dodgy no hidden layers fix
                self.z = np.dot(self.inputs, self.L[i])
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2           
            elif i == 1:  #input layer
                self.z = np.dot(self.inputs, self.L[i])
                self.temp = self.sigmoid(self.z)
            elif i < self.layers: #hidden layers
                self.z = np.dot(self.temp, self.L[i])
                self.temp = self.sigmoid(self.z)
            else: #output layer
                self.z = np.dot(self.temp, self.L[i])
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2

    def sigmoid(self, s):
        #activation function
        return 1/(1+np.exp(-s/self.p))

1 Answer


A bias is just a value that gets added to each neuron during your neural network's feedforward pass. So the feedforward from one layer of neurons to the next takes the sum of the previous layer's neurons multiplied by the weights feeding into the next neuron, and then adds that neuron's bias, or:

output = sum(weights * inputs) + bias
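
For a whole layer at once, that is just a dot product plus a bias vector. Here is a minimal numpy sketch (the names and numbers below are made up for illustration, not part of the original code):

import numpy as np

# hypothetical example: 2 inputs feeding a layer of 3 neurons
inputs = np.array([0.5, -1.0])               # previous layer's activations
weights = np.random.uniform(-1, 1, (2, 3))   # one column of weights per neuron
bias = np.random.uniform(-1, 1, 3)           # one bias per neuron in the layer

z = np.dot(inputs, weights) + bias           # sum(weights * inputs) + bias, per neuron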

To illustrate this, take a look at the image below:

[image: neural network example diagram]

Where:

X1: Input value 1.

X2: Input value 2.

B1n: Layer 1, neuron n bias.

H1: Hidden layer neuron 1.

H2: Hidden layer neuron 2.

a(…): activation function.

B2n: Layer 2, neuron n bias.

Y1: network output neuron 1.

Y2: network output neuron 2.

Y1out: network output 1.

Y2out: network output 2.

T1: Training output 1.

T2: Training output 2.

When calculating H1, you would use the following formula:

H1 = (X1 * W1) + (X2 * W2) + B11    

Note that this is before the neuron's value is finalised by passing it through the activation function.
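
As a quick worked example with made-up numbers: if X1 = 1, X2 = 2, W1 = 0.5, W2 = -0.25 and B11 = 0.1, then H1 = (1 * 0.5) + (2 * -0.25) + 0.1 = 0.1, and the neuron's final value would be a(0.1).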

So, I'm pretty sure the bias would be introduced in the feedforward function:

    def updateS(self): #forward propagation Sigmoid
        for i in range(1,self.layers+1):
            if 1 == self.layers:  #dodgy no hidden layers fix
                self.z = np.dot(self.inputs, self.L[i])
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2           
            elif i == 1:  #input layer
                self.z = np.dot(self.inputs, self.L[i])
                self.temp = self.sigmoid(self.z)
            elif i < self.layers: #hidden layers
                self.z = np.dot(self.temp, self.L[i])
                self.temp = self.sigmoid(self.z)
            else: #output layer
                self.z = np.dot(self.temp, self.L[i])
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2

by adding a value at the end of each self.z computation, before it goes through the sigmoid. I think these values can be whatever you want, because a bias simply shifts the intercept of the linear equation.
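
A minimal sketch of what that could look like, assuming one bias vector per weight matrix (the self.B dictionary below is my own addition, not something from the original code), with the biases initialised randomly and left untrained:

    # in __init__, after self.L is built: one bias vector per weight matrix,
    # sized to match the number of neurons that layer feeds into
    self.B = {}
    for i in range(1, self.layers+1):
        self.B[i] = np.random.uniform(-1, 1, self.L[i].shape[1])

    def updateS(self): #forward propagation Sigmoid, now with biases
        for i in range(1,self.layers+1):
            if 1 == self.layers:  #no hidden layers
                self.z = np.dot(self.inputs, self.L[i]) + self.B[i]
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2
            elif i == 1:  #input layer
                self.z = np.dot(self.inputs, self.L[i]) + self.B[i]
                self.temp = self.sigmoid(self.z)
            elif i < self.layers: #hidden layers
                self.z = np.dot(self.temp, self.L[i]) + self.B[i]
                self.temp = self.sigmoid(self.z)
            else: #output layer
                self.z = np.dot(self.temp, self.L[i]) + self.B[i]
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2

Note that if you later add backpropagation, the biases need to be updated alongside the weights; the gradient with respect to each bias is just that neuron's error signal, since the bias enters self.z with a coefficient of 1.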

answered 2019-11-23 at 13:44