I have a question about the 'weighted' average in sklearn.metrics.f1_score:
sklearn.metrics.f1_score(y_true, y_pred, labels=None, pos_label=1, average='weighted', sample_weight=None)
Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall.
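For context, this is roughly how I am calling it (the labels below are made up just to illustrate the parameter, they are not my real data):

from sklearn.metrics import f1_score

# Made-up toy labels, only to illustrate the averaging options
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

print(f1_score(y_true, y_pred, average='macro'))     # plain mean of per-label F1
print(f1_score(y_true, y_pred, average='weighted'))  # mean weighted by support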
First, is there any reference that justifies the use of the weighted F1? I am just curious about the cases in which I should use it.
Second, I heard that the weighted F1 is deprecated. Is that true?
Third, how is the weighted F1 actually computed? For example, given:
{
    "0": {
        "TP": 2,
        "FP": 1,
        "FN": 0,
        "F1": 0.8
    },
    "1": {
        "TP": 0,
        "FP": 2,
        "FN": 2,
        "F1": -1
    },
    "2": {
        "TP": 1,
        "FP": 1,
        "FN": 2,
        "F1": 0.4
    }
}
how would the weighted F1 be computed for the example above? I thought it should be (0.8*2/3 + 0.4*1/3)/3, but I was wrong.
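In case it is useful, here is a small snippet that, as far as I can tell, reproduces the per-label counts above, so the value of average='weighted' can be checked directly against sklearn (the y_true/y_pred below are my own reconstruction from the TP/FP/FN table, not the original data):

from sklearn.metrics import f1_score

# My reconstruction of labels matching the counts above:
# label 0: TP=2, FP=1, FN=0  -> F1 = 0.8
# label 1: TP=0, FP=2, FN=2  -> F1 undefined (sklearn reports 0; I wrote -1 above)
# label 2: TP=1, FP=1, FN=2  -> F1 = 0.4
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 0, 2, 2, 1, 1]

print(f1_score(y_true, y_pred, average=None))        # per-label F1
print(f1_score(y_true, y_pred, average='weighted'))  # the value I am asking about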