
I'm new to analyzing faces in videos with Amazon Rekognition.

I start my analysis with StartFaceSearch. Once the job completes successfully, I use the returned JobId to call GetFaceSearch (roughly as in the sketch below).
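
My calls look roughly like this minimal boto3 sketch (the bucket name, video key and collection ID are placeholders, and I poll for completion here instead of using an SNS notification):

    import time
    import boto3

    rekognition = boto3.client("rekognition")

    # Start the asynchronous face search against an existing collection.
    # Bucket, video key and collection ID are placeholders.
    start = rekognition.start_face_search(
        Video={"S3Object": {"Bucket": "my-bucket", "Name": "videos/sample2.mp4"}},
        CollectionId="my-face-collection",
    )
    job_id = start["JobId"]

    # Poll until the job finishes.
    while True:
        status = rekognition.get_face_search(JobId=job_id)["JobStatus"]
        if status in ("SUCCEEDED", "FAILED"):
            break
        time.sleep(10)

    # Page through the results; each entry in "Persons" carries a Timestamp,
    # the Person details (including Index) and any FaceMatches.
    persons = []
    next_token = None
    while True:
        kwargs = {"JobId": job_id}
        if next_token:
            kwargs["NextToken"] = next_token
        page = rekognition.get_face_search(**kwargs)
        persons.extend(page.get("Persons", []))
        next_token = page.get("NextToken")
        if not next_token:
            break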

For the first video I analyzed, the results were as expected. But when I analyzed a second video, something strange happened that I don't understand.

Looking at the JSON generated for my second video, completely different faces are identified with the same Index number.

See the results below.

{
    "Timestamp": 35960,
    "Person": {
        "Index": 11,
        "BoundingBox": {
            "Width": 0.09375,
            "Height": 0.24583333730698,
            "Left": 0.1875,
            "Top": 0.375
        },
        "Face": {
            "BoundingBox": {
                "Width": 0.06993006914854,
                "Height": 0.10256410390139,
                "Left": 0.24475525319576,
                "Top": 0.375
            },
            "Landmarks": [
                {
                    "Type": "eyeLeft",
                    "X": 0.26899611949921,
                    "Y": 0.40649232268333
                },
                {
                    "Type": "eyeRight",
                    "X": 0.28330621123314,
                    "Y": 0.41610333323479
                },
                {
                    "Type": "nose",
                    "X": 0.27063181996346,
                    "Y": 0.43293061852455
                },
                {
                    "Type": "mouthLeft",
                    "X": 0.25983560085297,
                    "Y": 0.44362303614616
                },
                {
                    "Type": "mouthRight",
                    "X": 0.27296212315559,
                    "Y": 0.44758656620979
                }
            ],
            "Pose": {
                "Roll": 22.106262207031,
                "Yaw": 6.3516845703125,
                "Pitch": -6.2676968574524
            },
            "Quality": {
                "Brightness": 41.875026702881,
                "Sharpness": 65.948883056641
            },
            "Confidence": 90.114051818848
        }
    }
}

{
    "Timestamp": 46520,
    "Person": {
        "Index": 11,
        "BoundingBox": {
            "Width": 0.19034090638161,
            "Height": 0.42083331942558,
            "Left": 0.30681818723679,
            "Top": 0.17916665971279
        },
        "Face": {
            "BoundingBox": {
                "Width": 0.076486013829708,
                "Height": 0.11217948794365,
                "Left": 0.38680067658424,
                "Top": 0.26923078298569
            },
            "Landmarks": [
                {
                    "Type": "eyeLeft",
                    "X": 0.40642243623734,
                    "Y": 0.32347011566162
                },
                {
                    "Type": "eyeRight",
                    "X": 0.43237379193306,
                    "Y": 0.32369664311409
                },
                {
                    "Type": "nose",
                    "X": 0.42121160030365,
                    "Y": 0.34618207812309
                },
                {
                    "Type": "mouthLeft",
                    "X": 0.41044121980667,
                    "Y": 0.36520344018936
                },
                {
                    "Type": "mouthRight",
                    "X": 0.43202903866768,
                    "Y": 0.36483728885651
                }
            ],
            "Pose": {
                "Roll": 0.3165397644043,
                "Yaw": 2.038902759552,
                "Pitch": -1.9931464195251
            },
            "Quality": {
                "Brightness": 54.697460174561,
                "Sharpness": 53.806159973145
            },
            "Confidence": 95.216400146484
        }
    }
}

In fact, in this video every face has the same Index number, whether or not it belongs to the same person. Any suggestions?


1 Answer


PersonDetail is the object returned by the API. "Index" is an identifier for a person detected within a single video, so indexes do not span videos. It is only an internal reference.

Details on Index are at the link below: https://docs.aws.amazon.com/rekognition/latest/dg/API_PersonDetail.html
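
To illustrate how Index is meant to be used within one video: you can group the "Persons" entries returned by GetFaceSearch by Person.Index, giving one group per tracked person. A minimal Python sketch (my own example, not from the docs; "persons" is the combined "Persons" list from the paged results):

    from collections import defaultdict

    def group_by_person_index(persons):
        """Group GetFaceSearch 'Persons' entries by Person.Index.

        Each key is the index of one person tracked within a single video;
        the same index value in a different video refers to a different person.
        """
        tracks = defaultdict(list)
        for entry in persons:
            tracks[entry["Person"]["Index"]].append(entry)
        return tracks

    # Example usage:
    # tracks = group_by_person_index(persons)
    # for index, entries in tracks.items():
    #     timestamps = [e["Timestamp"] for e in entries]
    #     print(f"Person {index}: seen at {timestamps}")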

Answered 2018-05-18T11:46:53.523