I have a tensor of shape [batch_size, channel, depth, height, width]:

torch.Size([1, 1, 32, 64, 64])

with data:
tensor([[[[[-1.8540, -2.8068, -2.7348, ..., -1.9074, -1.8227, -1.4540],
[-2.7012, -4.2785, -3.7421, ..., -3.1961, -2.7786, -1.8042],
[-2.1924, -4.2202, -4.4361, ..., -3.1203, -2.9282, -2.3800],
...,
[-2.7429, -4.3133, -4.4029, ..., -4.4971, -5.3288, -2.8659],
[-3.0169, -4.0198, -3.6886, ..., -3.7542, -4.5010, -2.4040],
[-1.6174, -2.5340, -2.3974, ..., -1.9249, -2.4107, -1.2664]],
[[-2.7840, -3.2442, -3.6118, ..., -3.1365, -2.8342, -1.9516],
[-3.5764, -4.9253, -5.9196, ..., -4.8373, -4.2233, -3.3809],
[-3.1701, -5.0826, -5.6424, ..., -5.2955, -4.6438, -3.4820],
...,
[-4.0111, -6.1946, -5.6582, ..., -6.7947, -6.5305, -4.2866],
[-4.2103, -6.6177, -6.0420, ..., -5.8076, -6.2128, -3.2093],
[-2.3174, -4.1081, -3.7369, ..., -3.5552, -3.1871, -1.9736]],
[[-2.8441, -4.1575, -3.8233, ..., -3.5065, -3.4313, -2.3030],
[-4.0076, -5.4939, -6.2451, ..., -4.6663, -4.9835, -3.1530],
[-3.4737, -5.6347, -6.0232, ..., -5.6191, -5.2626, -3.6109],
...,
[-3.8026, -5.3676, -6.1460, ..., -7.6695, -6.7640, -4.1681],
[-4.4012, -6.1293, -6.1859, ..., -6.0011, -6.1012, -3.5307],
[-2.7917, -4.2264, -4.1388, ..., -4.2080, -3.5555, -1.6384]],
...,
[[-2.2204, -3.5705, -4.3114, ..., -4.2249, -3.9628, -2.9190],
[-3.6343, -5.3445, -6.1638, ..., -6.3998, -6.7561, -4.8491],
[-3.4870, -5.5835, -5.6436, ..., -6.8527, -7.2536, -4.8143],
...,
[-2.4492, -3.7896, -5.4344, ..., -6.2853, -6.0766, -3.7538],
[-2.4723, -3.8393, -4.8480, ..., -5.6503, -5.0375, -3.5580],
[-1.6161, -2.9843, -3.2865, ..., -3.2627, -3.2887, -2.5750]],
[[-2.1509, -3.8303, -4.2807, ..., -3.7945, -3.7561, -3.0863],
[-3.1012, -5.1321, -6.1387, ..., -6.5191, -6.3268, -4.4283],
[-2.8346, -5.0640, -5.4868, ..., -6.6515, -6.5529, -4.3672],
...,
[-2.7278, -4.2538, -4.9776, ..., -6.4153, -6.0100, -3.9929],
[-2.8002, -4.0473, -4.7455, ..., -5.4203, -4.7286, -3.4111],
[-1.7964, -3.2307, -3.6329, ..., -3.2750, -2.3952, -1.9714]],
[[-1.4447, -2.1572, -2.4487, ..., -2.3859, -2.9540, -1.8451],
[-1.8075, -2.8380, -3.5621, ..., -3.8641, -3.5828, -2.7304],
[-1.7862, -2.9849, -3.8364, ..., -4.3380, -4.4745, -2.8476],
...,
[-1.8043, -2.5662, -2.7296, ..., -4.2772, -3.9882, -2.8654],
[-1.2364, -2.5228, -2.7190, ..., -4.1142, -3.6160, -2.2325],
[-1.0395, -1.7621, -2.5738, ..., -2.0349, -1.5140, -1.1625]]]]]
Now, to get the prediction from this, I use

torch.argmax(data, 1)

which should give me the locations of the maximum values along the channel dimension, but instead I get a tensor containing only zeros. Even max(torch.argmax(data, 1)) produces 0.

How can this be? The tensor has only a single channel and a single batch; how can argmax return all zeros?
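For reference, here is a minimal sketch that reproduces what I see (a random tensor stands in for my actual model output; the shape is the same):

    import torch

    # stand-in for my model output: [batch_size, channel, depth, height, width]
    output = torch.randn(1, 1, 32, 64, 64)

    pred = torch.argmax(output, 1)  # argmax along the channel dimension
    print(pred.shape)               # torch.Size([1, 32, 64, 64])
    print(pred.max().item())        # 0 -- every entry is 0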
To get rid of the negative values I applied torch.nn.Sigmoid() to it, but argmax still failed to find a maximum value. I don't understand this: how can there not be a maximum value?
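Concretely, assuming the same stand-in tensor as above:

    # sigmoid maps every value into (0, 1), so no more negatives
    probs = torch.sigmoid(output)

    pred = torch.argmax(probs, 1)  # argmax along the channel dimension again
    print(pred.max().item())       # still 0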
numpy.argmax(output.detach().numpy(), 1) gives the same output: all 0.
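That is, with the same stand-in tensor:

    import numpy as np

    # same reduction in NumPy, along axis 1 (the channel axis)
    pred_np = np.argmax(output.detach().numpy(), 1)
    print(pred_np.max())  # 0 again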
Am I not using argmax correctly?