I want to extend this script so that it also evaluates the top-k accuracy per class. I'm hoping this boils down to adding one more metric to the following snippet:

# Define the metrics:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    'Accuracy': slim.metrics.streaming_accuracy(predictions, labels),
    'Recall_5': slim.metrics.streaming_recall_at_k(
        logits, labels, 5),
})

I have already added a confusion matrix following this comment, which allows me to compute the per-class accuracy within the top-1 predictions. However, I'm not sure how to obtain the top-k values, because I couldn't find a suitable slim metric for it.

Clarification:

  • I'm not looking for the average top-k accuracy, but for the value of each individual class.
  • I was able to implement the required computation with basic tensor ops (see the sketch right after this list), but I'm new to the slim interface and don't know how to integrate it into the script above.
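
For reference, this is roughly the "basic tensor" version I mean (a minimal sketch; it assumes logits, labels, and num_classes are already defined, and it is not a streaming slim metric):

import tensorflow as tf

# per-sample correctness: True if the true class is among the top 5 predictions
correct = tf.cast(tf.nn.in_top_k(logits, labels, k=5), tf.float32)

# sum hits and occurrences per class (labels are plain class ids, not one-hot)
per_class_correct = tf.unsorted_segment_sum(correct, labels, num_classes)
per_class_total = tf.unsorted_segment_sum(tf.ones_like(correct), labels, num_classes)

# per-class top-5 accuracy; the maximum() avoids NaN for classes without samples
per_class_top5 = per_class_correct / tf.maximum(per_class_total, 1.0)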

1 Answer

I finally found a solution, based on the linked confusion matrix example.

It is more of a tweak than a pretty solution, but it works: I reuse the confusion matrix together with the top_k predictions. The relevant counts are stored in the first two columns of the tweaked confusion matrix.
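
To see why only the first two columns matter: the "prediction" fed into the confusion matrix is just the 0/1 output of in_top_k, so row i counts how often true class i landed outside (column 0) or inside (column 1) the top k. A quick illustration with made-up values:

import numpy as np
import tensorflow as tf

labels = np.array([0, 0, 1, 2, 2, 2])    # ground-truth class ids
in_top_k = np.array([1, 0, 1, 1, 1, 0])  # 1 = true class was within the top k

matrix = tf.confusion_matrix(labels, in_top_k, num_classes=3)
with tf.Session() as sess:
    print(sess.run(matrix))
# [[1 1 0]    class 0: 1 miss, 1 hit
#  [0 1 0]    class 1: 0 misses, 1 hit
#  [1 2 0]]   class 2: 1 miss, 2 hits
# columns >= 2 always stay zero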

This is needed to create the streaming metric:

def _get_top_k_per_class_correct_predictions_streaming_metrics(softmax_output, labels, num_classes, top_k):
    """Function to aggregate the correct predictions per class according to the in_top_k criterion.

    :param softmax_output: The per-class probabilities as predicted by the net.
    :param labels: The ground truth data. No(!) one-hot encoding here.
    :param num_classes: Total number of available classes.
    :param top_k: The k used for the in_top_k check.
    :return: The accumulator variable and its update op.
    """
    with tf.name_scope("eval"):
        # create a list with <batch_size> elements. each element is either 1 (prediction correct) or 0 (false)
        batch_correct_prediction_top_k = tf.nn.in_top_k(softmax_output, labels, top_k,
                                                        name="batch_correct_prediction_top_{}".format(top_k))

        # the above output is boolean, but we need integers to sum them up
        batch_correct_prediction_top_k = tf.cast(batch_correct_prediction_top_k, tf.int32)

        # use the confusion matrix implementation to get the desired results
        # we actually need only the first two columns of the returned matrix.
        batch_correct_prediction_top_k_matrix = tf.confusion_matrix(labels, batch_correct_prediction_top_k,
                                                                    num_classes=num_classes,
                                                                    name='batch_correct_prediction_top{}_matrix'.format(
                                                                        top_k))

        correct_prediction_top_k_matrix = _create_local_var('correct_prediction_top{}_matrix'.format(top_k),
                                                            shape=[num_classes, num_classes],
                                                            dtype=tf.int32)

        # Create the update op for doing a "+=" accumulation on the batch
        correct_prediction_top_k_matrix_update = correct_prediction_top_k_matrix.assign(
            correct_prediction_top_k_matrix + batch_correct_prediction_top_k_matrix)

    return correct_prediction_top_k_matrix, correct_prediction_top_k_matrix_update
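
If you want to sanity-check the streaming behaviour outside of slim, the accumulation works roughly like this (a sketch; num_classes and the batches iterable are placeholders):

softmax_ph = tf.placeholder(tf.float32, [None, num_classes])
labels_ph = tf.placeholder(tf.int64, [None])
value, update = _get_top_k_per_class_correct_predictions_streaming_metrics(
    softmax_ph, labels_ph, num_classes, top_k=5)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # the accumulator is a local variable
    for batch_softmax, batch_labels in batches:  # your evaluation batches
        sess.run(update, {softmax_ph: batch_softmax, labels_ph: batch_labels})
    accumulated_matrix = sess.run(value)  # the summed matrix over all batches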

Also:

from tensorflow.python.ops import variables


def _create_local_var(name, shape, collections=None, validate_shape=True,
                      dtype=tf.float32):
    """Creates a new local variable.

    This method is required to get the confusion matrix.
    see https://github.com/tensorflow/models/issues/1286#issuecomment-317205632

    Args:
      name: The name of the new or existing variable.
      shape: Shape of the new or existing variable.
      collections: A list of collection names to which the Variable will be added.
      validate_shape: Whether to validate the shape of the variable.
      dtype: Data type of the variables.
    Returns:
      The created variable.
    """
    # Make sure local variables are added to tf.GraphKeys.LOCAL_VARIABLES
    collections = list(collections or [])
    collections += [tf.GraphKeys.LOCAL_VARIABLES]
    return variables.Variable(
        initial_value=tf.zeros(shape, dtype=dtype),
        name=name,
        trainable=False,
        collections=collections,
        validate_shape=validate_shape)

Add the new metric to the slim configuration and evaluate:

# Define the metrics:
softmax_output = tf.nn.softmax(logits, name="softmax_for_evaluation")
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    [..]
    KEY_ACCURACY5_PER_CLASS_KEY_MATRIX: _get_top_k_per_class_correct_predictions_streaming_metrics(
        softmax_output, labels, self._dataset.num_classes - labels_offset, 5),
    [..]
})

# evaluate
results = slim.evaluation.evaluate_once([..])
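
For completeness, the evaluate_once wiring follows the stock slim evaluation scripts; roughly like this (a sketch, with checkpoint_path, eval_dir, and num_batches as placeholders; passing names_to_values as final_op is what makes the accumulated matrix appear in results):

results = slim.evaluation.evaluate_once(
    master='',
    checkpoint_path=checkpoint_path,
    logdir=eval_dir,
    num_evals=num_batches,                    # number of batches to accumulate
    eval_op=list(names_to_updates.values()),  # run all update ops per batch
    final_op=names_to_values)                 # fetched once at the end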

Finally, you can use the additional matrix to calculate the per-class top_k accuracies:

def _calc_in_class_accuracy_top_k(self, results):
    """Calculate the top_k accuracies per class.

    :param results: The metric values returned by slim.evaluation.evaluate_once.
    :return: An array with the top_k accuracy of each class.
    """
    # use a tweaked confusion matrix to calculate the in-class accuracy5
    # rows represent the real labels
    # the 1-th column contains the number of times that the associated class was correctly classified as one of
    # the top_k results. The 0-th column contains the number of failed predictions. The sum is the total number
    # of provided samples per class.
    matrix_top_k = results[KEY_ACCURACY5_PER_CLASS_KEY_MATRIX]

    n_classes = matrix_top_k.shape[0]
    in_class_accuracy_top_k_per_class = np.zeros(n_classes, np.float64)
    for class_id in range(n_classes):
        correct_top_k = matrix_top_k[class_id, 1]
        total_occurrences = np.sum(matrix_top_k[class_id])  # this many samples of the current class exist in total

        # top_k accuracy (guard against classes without any samples)
        if total_occurrences > 0:
            in_class_accuracy_top_k_per_class[class_id] = float(correct_top_k) / total_occurrences

    return in_class_accuracy_top_k_per_class
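
If you prefer, the loop can be replaced by a vectorized NumPy equivalent (same matrix layout assumed):

hits = matrix_top_k[:, 1].astype(np.float64)  # per-class hit counts
row_totals = matrix_top_k.sum(axis=1)         # per-class sample counts
per_class_top_k = np.where(row_totals > 0, hits / np.maximum(row_totals, 1), 0.0)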
answered 2018-02-12T13:41:56.300