
I've been using this tutorial as my reference for coding backpropagation. But today I found another tutorial that uses the same reference as mine, yet takes a different approach to changing the synapse weights. What is the difference between the two approaches?


EDIT

Thank you Renan for your quick response.

The main difference is:

  1. The first method changes the synapse weights right after the delta of each neuron (node) has been calculated.
  2. In the second method, the synapse weights are changed based on the delta from the layer above, and the delta of the current layer is calculated afterwards (a sketch of both orderings follows the note below).

Note: I'll edit this explanation if it's still not clear. Thanks.
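
To make the difference concrete, here is a minimal NumPy sketch of how I read the two methods, for a single hidden layer with sigmoid units and a squared-error loss (hypothetical function names, biases omitted; this is not the code from either tutorial):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step_method1(x, target, w_hidden, w_out, lr=0.1):
    """Method 1 as I read it: calculate the delta of each layer first,
    then change the synapse weights."""
    h = sigmoid(x @ w_hidden)                   # hidden activations
    o = sigmoid(h @ w_out)                      # output activations

    delta_out = (o - target) * o * (1.0 - o)
    # delta_hidden needs the not-yet-changed w_out, so compute it first.
    delta_hidden = (w_out @ delta_out) * h * (1.0 - h)

    w_out = w_out - lr * np.outer(h, delta_out)
    w_hidden = w_hidden - lr * np.outer(x, delta_hidden)
    return w_hidden, w_out

def train_step_method2(x, target, w_hidden, w_out, lr=0.1):
    """Method 2 as I read it: the weight change is applied first, based on
    the delta from the layer above, and the current layer's delta is
    calculated afterwards -- from the old weight values, so the result is
    exactly the same as in method 1."""
    h = sigmoid(x @ w_hidden)
    o = sigmoid(h @ w_out)

    delta_out = (o - target) * o * (1.0 - o)
    old_w_out = w_out                                   # pre-update values
    w_out = w_out - lr * np.outer(h, delta_out)         # weight change first
    delta_hidden = (old_w_out @ delta_out) * h * (1.0 - h)
    w_hidden = w_hidden - lr * np.outer(x, delta_hidden)
    return w_hidden, w_out
```

Given the same inputs and starting weights, both functions return the same updated weights; only the point at which the weight change is applied differs.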


1 Answer


Equal calculations

Since the delta of the current layer depends on the layer of weights between the layer above and the current layer, both methods are correct.

But adjusting the weights coming into a layer before calculating the delta of the next layer would not be correct!
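
For example, in a hypothetical single-hidden-layer NumPy sketch (not the code from either tutorial), the following ordering would be incorrect, because the hidden delta ends up being computed from the already-changed output weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step_wrong_order(x, target, w_hidden, w_out, lr=0.1):
    """Incorrect ordering: the weights coming into the output layer are
    changed before the hidden delta, which depends on them, is computed."""
    h = sigmoid(x @ w_hidden)                   # hidden activations
    o = sigmoid(h @ w_out)                      # output activations

    delta_out = (o - target) * o * (1.0 - o)
    w_out = w_out - lr * np.outer(h, delta_out)         # adjusted too early!
    delta_hidden = (w_out @ delta_out) * h * (1.0 - h)  # uses the new w_out
    w_hidden = w_hidden - lr * np.outer(x, delta_hidden)
    return w_hidden, w_out
```

Computing delta_hidden before the w_out update (or keeping a copy of the old w_out, as in the sketch in the question) makes it correct.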

Equations

[image: the equation for the derivative of the error with respect to the weights]

Here you can see that the mathematical equation for calculating the derivative of the error with respect to the weights depends on the weights between this layer and the layer above (using sigmoid units).

O_i = the layer below   # e.g. the input layer
O_k = the current layer # e.g. a hidden layer
O_o = the layer above   # e.g. the output layer
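
Since the image is not reproduced above, here is a reconstruction of the standard sigmoid backpropagation equations it refers to, using the layer names defined above, with w_ik the weight from O_i to O_k, w_ko the weight from O_k to O_o, and t_o the target value (the notation may differ slightly from the original image):

```latex
\delta_o = (O_o - t_o)\, O_o (1 - O_o)
\qquad
\delta_k = O_k (1 - O_k) \sum_o \delta_o\, w_{ko}
\qquad
\frac{\partial E}{\partial w_{ik}} = \delta_k\, O_i
```

The middle equation is the important one here: delta_k is built from the weights w_ko, so those weights must still hold their old values when delta_k is computed.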
answered 2013-09-06 at 07:21