I'm using this article to implement a neural network with backpropagation, but I'm having trouble calculating the errors. In a nutshell, my sigmoid function squashes all of my node outputs to 1.0, which then causes the error calculation to return 0:
error = (expected - actual) * (1 - actual) * actual
When actual is 1.0, the (1 - actual) factor is 0, so the whole product, and therefore my error, is always 0.
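For example, plugging in illustrative values (expected and actual here are just placeholders; in my network every actual comes out as 1.0):

# ruby
expected = 0.0  # placeholder target value
actual   = 1.0  # what my saturated nodes produce
error = (expected - actual) * (1 - actual) * actual
puts error  # => -0.0, i.e. zero regardless of expected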
I suspect the problem lies in my sigmoid implementation, which is returning exactly 1.0 instead of staying asymptotically below it:
# ruby
def sigmoid(x)
  # Logistic function: mathematically this should stay strictly between 0 and 1
  1.0 / (1.0 + Math.exp(-x))
end
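Probing the function above directly (the inputs 10 and 40 are just illustrative weighted sums, not values pulled from my network), I see:

# ruby
puts sigmoid(10)  # => 0.9999546021312976 (still strictly below 1.0)
puts sigmoid(40)  # => 1.0 exactly: Math.exp(-40) is too small to change 1.0 in Float arithmetic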
Am I correct that sigmoid should never actually reach 1.0, or have I got something else wrong?