
I have a neural network of the form N = W1 * Tanh(W2 * I), where I is the input vector/matrix. When I learn these weights, the output has a certain form. However, when I add a normalization layer, for example N' = Softmax( W1 * Tanh(W2 * I) ), a single element of the output vector of N' is close to 1 while the rest are almost zero. This happens not only with Softmax() but with any normalizing layer. Is there a standard solution to this problem? A minimal NumPy sketch of the two forms is below (shapes and scales are made up for illustration).
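```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
I = rng.normal(size=(5,))            # input vector (size assumed)
W2 = rng.normal(size=(8, 5))         # hidden weights (shapes assumed)
W1 = rng.normal(size=(4, 8)) * 5.0   # output weights; large scale exaggerates the effect

N = W1 @ np.tanh(W2 @ I)             # un-normalized output
N_prime = softmax(N)                 # normalized output

print(N)        # logits with a wide spread
print(N_prime)  # one entry near 1, the rest near 0
```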


1 Answer


That is the expected behavior of the softmax function: it exponentiates the logits, so the largest one dominates and the output looks nearly one-hot when the logits are far apart. If you want each output squashed to (0, 1) independently, perhaps what you need is a sigmoid function instead.
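A small sketch of the difference on some made-up logits (values chosen only to illustrate the contrast):

```python
import numpy as np

def softmax(z):
    # Normalizes across the whole vector; the largest logit dominates
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sigmoid(z):
    # Squashes each element independently into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([4.0, 1.0, 0.5, -2.0])

print(softmax(logits))  # ~[0.92, 0.05, 0.03, 0.00] -> nearly one-hot
print(sigmoid(logits))  # ~[0.98, 0.73, 0.62, 0.12] -> each element kept separately
```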

Answered 2017-10-22T04:20:00.343