
I have been running this LSTM tutorial on the wikigold.conll NER dataset.

training_data contains a list of tuples of sequences and tags, for example:

training_data = [
    ("They also have a song called \" wake up \"".split(), ["O", "O", "O", "O", "O", "O", "I-MISC", "I-MISC", "I-MISC", "I-MISC"]),
    ("Major General John C. Scheidt Jr.".split(), ["O", "O", "I-PER", "I-PER", "I-PER"])
]
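
For reference, the word_to_ix lookup and prepare_sequence helper used below follow the tutorial's pattern, roughly like this:

import torch

# Vocabulary index built from the training sentences, as in the tutorial.
word_to_ix = {}
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)

def prepare_sequence(seq, to_ix):
    """Map a list of tokens to a 1-D LongTensor of their indices."""
    return torch.tensor([to_ix[w] for w in seq], dtype=torch.long)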

I wrote the following function:

def predict(indices):
    """Gets a list of indices of training_data, and returns a list of predicted lists of tags"""
    for index in indices:
        inputs = prepare_sequence(training_data[index][0], word_to_ix)
        tag_scores = model(inputs)
        values, target = torch.max(tag_scores, 1)
        yield target

This way I can get the predicted tags for specific indices of the training data.
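
For example (note that each yielded item is a tensor of predicted tag indices, not tag strings):

y01 = list(predict([0, 1]))
print(y01[0])  # predicted tag indices for the first training sentence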

But how can I evaluate the accuracy score over all of the training data?

By accuracy I mean the number of correctly classified words across all sentences, divided by the total number of words.

This is what I came up with, which is very slow and ugly:

y_pred = list(predict(range(len(training_data))))
y_true = [t for s, t in training_data]
c=0
s=0
for i in range(len(training_data)):
    n = len(y_true[i])
    #super ugly and inefficient
    s+=(sum(sum(list(y_true[i].view(-1, n) == y_pred[i].view(-1, n).data))))
    c+=n

print('Training accuracy: {a}'.format(a=float(s)/c))

How can I do this efficiently in PyTorch?

PS: I have been trying to use sklearn's accuracy_score, without success.


2 Answers


To avoid iterating over the lists in pure Python, I would use numpy.

The result is the same, but it runs much faster:

import numpy as np

def accuracy_score(y_true, y_pred):
    # Flatten the per-sentence predictions and gold tags into 1-D arrays of equal shape.
    y_pred = np.concatenate(tuple(y_pred))
    y_true = np.concatenate(tuple([[t for t in y] for y in y_true])).reshape(y_pred.shape)
    return (y_true == y_pred).sum() / float(len(y_true))

Here is how to use it:

#original code:
y_pred = list(predict(range(len(training_data))))
y_true = [t for s, t in training_data]
#numpy accuracy score
print(accuracy_score(y_true, y_pred))
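
If you want to stay entirely in torch, the same token-level accuracy can be computed with torch.cat; a minimal sketch, assuming tag_to_ix is the tag-to-index mapping the model was trained with (it is not defined in the question), so the gold tags can be turned into index tensors:

import torch

def torch_accuracy(y_true, y_pred, tag_to_ix):
    """Token-level accuracy with torch ops: correct tags / total tags."""
    # Gold tag lists -> one flat index tensor; per-sentence predictions -> one flat tensor.
    true_idx = torch.cat([torch.tensor([tag_to_ix[t] for t in tags], dtype=torch.long)
                          for tags in y_true])
    pred_idx = torch.cat([p.view(-1) for p in y_pred])
    return (true_idx == pred_idx).float().mean().item()

print(torch_accuracy(y_true, y_pred, tag_to_ix))

This concatenates everything once and compares in a single vectorized step, instead of looping over sentences in Python.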
answered 2017-05-23

You can use sklearn's accuracy_score like this:

from sklearn.metrics import accuracy_score

# target holds the predicted tag index for each token; train_y holds the gold tag indices.
values, target = torch.max(tag_scores, -1)
accuracy = accuracy_score(train_y, target)
print("\nTraining accuracy is %d%%" % (accuracy * 100))
answered 2018-09-06