I have built a simple recurrent neural network that predicts a very noisy signal from a series of 15 inputs (statistical breakdowns of the signal).
From what I can tell in the pybrain source (pybrain\supervised\trainers\backprop.py), the error function is hardcoded in the _calcDerivs function as the sum of the squared errors, and the train function then divides that sum by the total number of targets to give the MSE.
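
To make sure I'm reading it right, here is a toy illustration of that default error as I understand it (this is just my own numpy paraphrase, not code from PyBrain):

    import numpy as np

    # Squared errors are summed per sample in _calcDerivs, and train
    # divides the sum by the number of targets to report the MSE.
    targets = np.array([0.2, -0.1, 0.4])
    outputs = np.array([0.1, 0.0, 0.5])
    sum_squared_error = np.sum((targets - outputs) ** 2)  # accumulated in _calcDerivs
    mse = sum_squared_error / targets.size                # division done in train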
In my case, it is more important that the network predict the direction of the signal's change than the exact amount of change, so I want to penalize the NN when it predicts down but the signal moves up, and vice versa. I've been experimenting with passing _calcDerivs not only the current target, but also the previous target and outputs, which I use to calculate a weight based on whether or not the output predicted the direction correctly (a sketch of that weighting is below). However, the network fails to converge with both RProp and backprop, and the whole approach feels very hacky to me.
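
For concreteness, the directional weight I compute looks roughly like this (a standalone sketch; direction_weight and the penalty value are just names I'm using for illustration, the real code is wedged into my modified _calcDerivs):

    import numpy as np

    def direction_weight(prev_target, target, prev_output, output, penalty=2.0):
        # Weight is 1.0 when the network got the sign of the change right,
        # and `penalty` (> 1) when it predicted the wrong direction.
        actual_change = target - prev_target
        predicted_change = output - prev_output
        wrong_direction = np.sign(actual_change) != np.sign(predicted_change)
        return np.where(wrong_direction, penalty, 1.0)

    # Inside my hacked _calcDerivs I then scale the output error by this
    # weight before backpropagating, roughly:
    #   outerr = (target - output) * direction_weight(prev_target, target,
    #                                                 prev_output, output)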
My question is: is there a recommended way to modify the default performance function? Is all of the performance-function code kept in _calcDerivs, or am I missing something?