For a computer assignment, I have to implement the word2vec algorithm, i.e. use a neural network to generate dense vectors for a set of words. I have implemented the network and trained it on the training data. First of all, how do I test it on the test data? The assignment asks for a plot showing the perplexity of the training and test data over the training epochs. I can do this for the loss, which looks like this:
EPOCH: 0 LOSS: 27030.09155006593
EPOCH: 0 P_LOSS: 24637.964948774144
EPOCH: 0 PP: inf
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:121: RuntimeWarning: overflow encountered in double_scalars
EPOCH: 1 LOSS: 25349.086587261085
EPOCH: 1 P_LOSS: 22956.95998596929
EPOCH: 1 PP: inf
EPOCH: 2 LOSS: 24245.455581381622
EPOCH: 2 P_LOSS: 21853.32898008983
EPOCH: 2 PP: inf
EPOCH: 3 LOSS: 23312.976009712416
EPOCH: 3 P_LOSS: 20920.849408420647
and which I obtain with the following code:
# CYCLE THROUGH EACH EPOCH
for i in range(0, self.epochs):
    self.loss = 0
    self.loss_prob = 0

    # CYCLE THROUGH EACH TRAINING SAMPLE
    for w_t, w_c in training_data:
        # FORWARD PASS
        y_pred, h, u = self.forward_pass(w_t)

        # CALCULATE ERROR
        EI = np.sum([np.subtract(y_pred, word) for word in w_c], axis=0)

        # BACKPROPAGATION
        self.backprop(EI, h, w_t)

        # CALCULATE LOSS
        self.loss += -np.sum([u[word.index(1)] for word in w_c]) + len(w_c) * np.log(np.sum(np.exp(u)))
        self.loss_prob += -2 * np.log(len(w_c)) - np.sum([u[word.index(1)] for word in w_c]) + (len(w_c) * np.log(np.sum(np.exp(u))))

    print('EPOCH:', i, 'LOSS:', self.loss)
    print('EPOCH:', i, 'P_LOSS:', self.loss_prob)
    print('EPOCH:', i, 'PP:', 2**self.loss_prob)
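For the dev data, my current idea is to run only the forward pass over the dev pairs at the end of each epoch and accumulate the same loss, without calling backprop, so the weights are not updated. Is something like the following sketch the right direction? (The method name evaluate_loss and the variable dev_data are placeholders of mine; dev_data is assumed to have the same (w_t, w_c) structure as training_data.)

    # Hypothetical sketch: per-epoch evaluation on the dev set, forward pass only
    def evaluate_loss(self, dev_data):
        dev_loss = 0
        for w_t, w_c in dev_data:
            # FORWARD PASS ONLY (no backprop, weights stay fixed)
            y_pred, h, u = self.forward_pass(w_t)
            # SAME LOSS FORMULA AS IN TRAINING
            dev_loss += -np.sum([u[word.index(1)] for word in w_c]) + len(w_c) * np.log(np.sum(np.exp(u)))
        return dev_loss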
However, I do not know how to get the perplexity of the training and dev data in each epoch. Based on this question, the perplexity is said to be 2**loss, but when I try that formula I get inf, as shown in the output above. Can you guide me on how to calculate the perplexity? Can I do it inside my current code, or should I apply a separate function to the whole dev data?
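My current suspicion is that the inf comes from exponentiating the total accumulated loss (which is in the tens of thousands above, far too large an exponent for a double) instead of a per-prediction average. If that is the right idea, the fix would be something like the sketch below, where n_predictions is a placeholder of mine for the total number of (target, context) predictions in the epoch:

    # Hypothetical sketch: average the negative log-likelihood per prediction
    # before exponentiating, so the exponent stays small enough not to overflow.
    n_predictions = sum(len(w_c) for _, w_c in training_data)  # placeholder count
    avg_loss = self.loss / n_predictions   # average negative log-likelihood (natural log)
    perplexity = np.exp(avg_loss)          # equivalent to 2 ** (avg_loss / np.log(2))
    print('EPOCH:', i, 'PP:', perplexity)

Is that the correct way to get a per-epoch perplexity, or does the assignment expect something else?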