I have a 3D point-cloud dataset containing N cell motion trajectories, and I want to build a regression model to predict the cell movement speed.
I created a simulated dataset in which the cell speed is set to either 0.00024 or 0.00014.
I would like the regression model to be able to recover this value.
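For reference, this is a minimal sketch of how the regression targets are constructed (hypothetical names and sizes; the real generation code differs): each simulated trajectory gets one of the two ground-truth speeds as its scalar target.

```python
import numpy as np
import torch

# Hypothetical sketch: assign each simulated trajectory one of the two speeds.
N = 1800                                    # e.g. the 1600 train + 200 val samples in the log below
speeds = np.random.choice([0.00024, 0.00014], size=N)
targets = torch.tensor(speeds)              # NumPy float64 -> torch.float64 by default
```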
I compute the loss with the MAE loss function, nn.L1Loss().
pred
tensor([ 0.09911, -1.06813, 0.21607, -1.30313, -0.11562, 0.92748, 0.53360,
1.27387, -0.07128, 0.40803, -1.00205, -0.21312, -0.84409, 0.30214,
-0.01497, 0.44172], device='cuda:0', grad_fn=<SelectBackward>)
targets
tensor([0.00024, 0.00024, 0.00024, 0.00024, 0.00024, 0.00024, 0.00014, 0.00014,
0.00024, 0.00014, 0.00014, 0.00024, 0.00014, 0.00024, 0.00014, 0.00024],
device='cuda:0', dtype=torch.float64)
loss
tensor(0.55215, device='cuda:0', grad_fn=<L1LossBackward>)
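To make the numbers above concrete, here is a minimal sketch that reproduces the printed loss from the tensors shown (only the values above are assumed): because the targets are on the order of 1e-4 while the raw predictions are on the order of 1, the MAE is essentially mean(|pred|).

```python
import torch
import torch.nn as nn

# Values copied from the printout above.
pred = torch.tensor([ 0.09911, -1.06813,  0.21607, -1.30313, -0.11562,  0.92748,
                      0.53360,  1.27387, -0.07128,  0.40803, -1.00205, -0.21312,
                     -0.84409,  0.30214, -0.01497,  0.44172])
targets = torch.tensor([0.00024, 0.00024, 0.00024, 0.00024, 0.00024, 0.00024,
                        0.00014, 0.00014, 0.00024, 0.00014, 0.00014, 0.00024,
                        0.00014, 0.00024, 0.00014, 0.00024], dtype=torch.float64)

criterion = nn.L1Loss()
loss = criterion(pred, targets.float())     # cast to float32 to match pred's dtype
print(loss)                                 # ~0.5522, matching the printed 0.55215;
                                            # with targets this close to zero, MAE ≈ mean(|pred|)
```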
However, during training the loss is very low from the start and converges quickly, yet the predictions are far from the true values. I standardized the data, but still got similar results (a rough sketch of the kind of normalization I mean is shown after the training log below).
Ep: 0, loss: 0.20748 num_samples: 1600.000 pearson: 0.006 lr: 0.00100 sec: 12.48
Ep: 0, val_loss: 0.07817 val_num_samples: 200.000 val_pearson: 0.007 lr: -1.00000 sec: 0.65
Ep: 1, loss: 0.05833 num_samples: 1600.000 pearson: 0.035 lr: 0.00100 sec: 12.21
Ep: 1, val_loss: 0.05920 val_num_samples: 200.000 val_pearson: 0.041 lr: -1.00000 sec: 0.66
Ep: 2, loss: 0.03816 num_samples: 1600.000 pearson: -0.004 lr: 0.00100 sec: 12.24
Ep: 2, val_loss: 0.03205 val_num_samples: 200.000 val_pearson: 0.086 lr: -1.00000 sec: 0.65
Ep: 3, loss: 0.03777 num_samples: 1600.000 pearson: -0.015 lr: 0.00100 sec: 12.33
Ep: 3, val_loss: 0.02923 val_num_samples: 200.000 val_pearson: 0.011 lr: -1.00000 sec: 0.66
Ep: 4, loss: 0.02881 num_samples: 1600.000 pearson: -0.006 lr: 0.00100 sec: 12.26
Ep: 4, val_loss: 0.01197 val_num_samples: 200.000 val_pearson: 0.029 lr: -1.00000 sec: 0.66
Ep: 5, loss: 0.03122 num_samples: 1600.000 pearson: -0.006 lr: 0.00100 sec: 12.25
Ep: 5, val_loss: 0.09957 val_num_samples: 200.000 val_pearson: 0.053 lr: -1.00000 sec: 0.66
Ep: 6, loss: 0.03401 num_samples: 1600.000 pearson: 0.025 lr: 0.00100 sec: 12.23
Ep: 6, val_loss: 0.00949 val_num_samples: 200.000 val_pearson: -0.033 lr: -1.00000 sec: 0.66
Ep: 7, loss: 0.02854 num_samples: 1600.000 pearson: 0.004 lr: 0.00100 sec: 12.24
Ep: 7, val_loss: 0.05857 val_num_samples: 200.000 val_pearson: 0.081 lr: -1.00000 sec: 0.66
Ep: 8, loss: 0.02677 num_samples: 1600.000 pearson: -0.001 lr: 0.00100 sec: 12.25
Ep: 8, val_loss: 0.03597 val_num_samples: 200.000 val_pearson: -0.082 lr: -1.00000 sec: 0.65
Ep: 9, loss: 0.03201 num_samples: 1600.000 pearson: -0.063 lr: 0.00100 sec: 12.23
Ep: 9, val_loss: 0.02059 val_num_samples: 200.000 val_pearson: -0.036 lr: -1.00000 sec: 0.67
Ep: 10, loss: 0.02791 num_samples: 1600.000 pearson: 0.005 lr: 0.00100 sec: 12.28
Ep: 10, val_loss: 0.02280 val_num_samples: 200.000 val_pearson: 0.055 lr: -1.00000 sec: 0.67
Ep: 11, loss: 0.02798 num_samples: 1600.000 pearson: 0.046 lr: 0.00100 sec: 12.29
Ep: 11, val_loss: 0.03420 val_num_samples: 200.000 val_pearson: 0.058 lr: -1.00000 sec: 0.65
Ep: 12, loss: 0.02330 num_samples: 1600.000 pearson: 0.027 lr: 0.00100 sec: 12.31
Ep: 12, val_loss: 0.04230 val_num_samples: 200.000 val_pearson: -0.098 lr: -1.00000 sec: 0.67
Ep: 13, loss: 0.05035 num_samples: 1600.000 pearson: -0.001 lr: 0.00100 sec: 12.26
Ep: 13, val_loss: 0.09219 val_num_samples: 200.000 val_pearson: 0.076 lr: -1.00000 sec: 0.64
Ep: 14, loss: 0.02329 num_samples: 1600.000 pearson: 0.015 lr: 0.00100 sec: 12.28
Ep: 14, val_loss: 0.05160 val_num_samples: 200.000 val_pearson: 0.088 lr: -1.00000 sec: 0.66
Ep: 15, loss: 0.03910 num_samples: 1600.000 pearson: 0.058 lr: 0.00100 sec: 12.28
Ep: 15, val_loss: 0.03693 val_num_samples: 200.000 val_pearson: -0.074 lr: -1.00000 sec: 0.66
Ep: 16, loss: 0.02947 num_samples: 1600.000 pearson: 0.007 lr: 0.00100 sec: 12.29
Ep: 16, val_loss: 0.03596 val_num_samples: 200.000 val_pearson: -0.071 lr: -1.00000 sec: 0.66
Ep: 17, loss: 0.02730 num_samples: 1600.000 pearson: 0.030 lr: 0.00100 sec: 12.31
Ep: 17, val_loss: 0.02616 val_num_samples: 200.000 val_pearson: 0.092 lr: -1.00000 sec: 0.66
Ep: 18, loss: 0.02828 num_samples: 1600.000 pearson: -0.030 lr: 0.00100 sec: 12.32
Ep: 18, val_loss: 0.01098 val_num_samples: 200.000 val_pearson: 0.030 lr: -1.00000 sec: 0.65
Ep: 19, loss: 0.02978 num_samples: 1600.000 pearson: 0.009 lr: 0.00100 sec: 12.31
Ep: 19, val_loss: 0.02282 val_num_samples: 200.000 val_pearson: 0.078 lr: -1.00000 sec: 0.66
Ep: 20, loss: 0.01857 num_samples: 1600.000 pearson: 0.044 lr: 0.00100 sec: 12.31
Ep: 20, val_loss: 0.00776 val_num_samples: 200.000 val_pearson: 0.077 lr: -1.00000 sec: 0.65
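For completeness, this is roughly the kind of standardization mentioned above (hypothetical helper names, not the exact code): z-scoring the point-cloud coordinates per trajectory, plus optionally rescaling the ~1e-4 speed targets to O(1) so the MAE is not dominated by the magnitude of the raw predictions.

```python
import torch

def standardize_points(points: torch.Tensor) -> torch.Tensor:
    # points: (num_points, 3) coordinates of one trajectory
    mean = points.mean(dim=0, keepdim=True)
    std = points.std(dim=0, keepdim=True).clamp_min(1e-8)
    return (points - mean) / std

TARGET_SCALE = 1e4                          # 0.00024 -> 2.4, 0.00014 -> 1.4

def scale_target(speed: torch.Tensor) -> torch.Tensor:
    return speed.float() * TARGET_SCALE     # invert with pred / TARGET_SCALE at evaluation time
```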