
I'm new to AWS, DynamoDB, and Python, so I'm struggling with this task. I'm using Amazon Transcribe on videos and getting the output as a JSON file. I then want to store that data in DynamoDB.

Currently I'm using a Lambda function to automate the process whenever a JSON file is dropped into an S3 bucket. Every time it runs, I get this error in CloudWatch:

[ERROR] ClientError: An error occurred (ValidationException) when calling the PutItem operation: One or more parameter values were invalid: Missing the key type in the item
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 79, in lambda_handler
    table.put_item(Item=jsonDict) # Adds string of JSON file into the database
  File "/var/runtime/boto3/resources/factory.py", line 520, in do_action
    response = action(self, *args, **kwargs)
  File "/var/runtime/boto3/resources/action.py", line 83, in __call__
    response = getattr(parent.meta.client, operation_name)(**params)
  File "/var/runtime/botocore/client.py", line 320, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/var/runtime/botocore/client.py", line 623, in _make_api_call
    raise error_class(parsed_response, operation_name)

Here is the Python code where I try to create a DynamoDB table and then parse the JSON file:

import boto3  # import to pull AWS SDK for Python
import json  # import API for Python to work with JSON files
import time  # import time functions
s3_client = boto3.client('s3')  # creates low-level service client to AWS S3
dynamodb = boto3.resource('dynamodb')  # creates resource client to AWS DynamoDB


# When a .JSON file is added to the linked S3 bucket, Lambda receives an S3 event (itself JSON)
# containing information about the bucket and the name of the file that was added

def lambda_handler(event, context):
    print(str(event))

    # Print the JSON file created by S3 into CloudWatch Logs when an item is added into the bucket

    bucket = event['Records'][0]['s3']['bucket']['name']

    # Here the name of the S3 bucket is assigned to the variable 'bucket'
    # by grabbing the name from the JSON file created

    json_file_name = event['Records'][0]['s3']['object']['key']

    # Here the name of the file itself is assigned to the variable 'json_file_name'
    # again by grabbing the name of the added file from the JSON file

    tname = json_file_name[:-5]

    # Defines the name of the table being added to dynamodb by using the name of S3 JSON file
    # *Use of [:-5] will strip the last five characters ('.json') off the end of the file name

    print(tname)

    # Prints the name of the table into CloudWatch Logs

    json_object = s3_client.get_object(Bucket=bucket,Key=json_file_name)

    # json_object holds the S3 GetObject response for 'json_file_name' in 'bucket',
    # retrieved with the boto3 client; the rest of the script reads the JSON file
    # that was just added to the bucket from this response

    jsonFileReader = json_object['Body'].read()

    # jsonFileReader reads the body of the JSON object, i.e. the raw bytes of the file

    jsonDict = json.loads(jsonFileReader)

    # json.loads parses the JSON text into a Python dictionary (with nested dicts and lists)

    table = dynamodb.create_table(
        TableName=tname, ## Define table name from name of JSON file in S3
    KeySchema=[
        {
            'AttributeName': 'type', #Primary Key
            'KeyType': 'HASH'  #Partition Key
        }
    ],
    AttributeDefinitions=[
        {
            'AttributeName': 'type',
            'AttributeType': 'S' # AttributeType 'S' means 'String'
        }

    ],
    ProvisionedThroughput=
        {
            'ReadCapacityUnits': 10000,
            'WriteCapacityUnits': 10000
        }
    )

#    table.meta.client.get_waiter('table_exists').wait(TableName=tname)
    print(str(jsonDict))


    table.meta.client.get_waiter('table_exists').wait(TableName=tname)

    table = dynamodb.Table(tname)  # Specifies table to be used

    table.put_item(Item=jsonDict)  # Adds string of JSON file into the database

I'm not very familiar with parsing nested JSON files and have no experience with DynamoDB. Any help getting this working would be greatly appreciated!

Here is the JSON file I'm trying to parse:

{
    "results": {
        "items": [{
            "start_time": "15.6",
            "end_time": "15.95",
            "alternatives": [{
                "confidence": "0.6502",
                "content": "Please"
            }],
            "type": "pronunciation"
        }, {
            "alternatives": [{
                "confidence": null,
                "content": "."
            }],
            "type": "punctuation"
        }, {
            "start_time": "15.95",
            "end_time": "16.2",
            "alternatives": [{
                "confidence": "0.9987",
                "content": "And"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "16.21",
            "end_time": "16.81",
            "alternatives": [{
                "confidence": "0.9555",
                "content": "bottles"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "16.81",
            "end_time": "17.01",
            "alternatives": [{
                "confidence": "0.7179",
                "content": "of"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "17.27",
            "end_time": "17.36",
            "alternatives": [{
                "confidence": "0.6274",
                "content": "rum"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "18.12",
            "end_time": "18.5",
            "alternatives": [{
                "confidence": "0.9977",
                "content": "with"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "18.5",
            "end_time": "19.1",
            "alternatives": [{
                "confidence": "0.3689",
                "content": "tattoos"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "19.11",
            "end_time": "19.59",
            "alternatives": [{
                "confidence": "1.0000",
                "content": "like"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "19.59",
            "end_time": "20.22",
            "alternatives": [{
                "confidence": "0.9920",
                "content": "getting"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "20.22",
            "end_time": "20.42",
            "alternatives": [{
                "confidence": "0.5659",
                "content": "and"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "20.43",
            "end_time": "20.97",
            "alternatives": [{
                "confidence": "0.6694",
                "content": "juggle"
            }],
            "type": "pronunciation"
        }, {
            "start_time": "21.2",
            "end_time": "21.95",
            "alternatives": [{
                "confidence": "0.8893",
                "content": "lashes"
            }],
            "type": "pronunciation"
        }, {
            "alternatives": [{
                "confidence": null,
                "content": "."
            }],
            "type": "punctuation"
        }, {
            "start_time": "21.95",
            "end_time": "22.19",
            "alternatives": [{
                "confidence": "1.0000",
                "content": "And"
            }]
        }]
    }
}

Another issue I seem to be running into is how to handle the punctuation, since AWS Transcribe doesn't assign timestamps to those items.

Any help is appreciated, thanks!


1 Answer


What matters in a database is the key, which must be unique for each row of your data. In your case [Option 1], if you intend to put each JSON file into its own table (tname), you have to supply a set of unique values for the key, which here appears to be start_time. Alternatively [Option 2], you can put all current and future data into one shared table and keep tname as the key, assuming it is unique per file.

Option 1

Replace 'AttributeName': 'type' in both KeySchema and AttributeDefinitions with 'AttributeName': 'start_time'.
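For reference, a minimal sketch of the create_table call after that change, keeping everything else exactly as in the question (it still relies on the dynamodb resource and tname defined there):

table = dynamodb.create_table(
    TableName=tname,  # Table name still taken from the JSON file name
    KeySchema=[
        {
            'AttributeName': 'start_time',  # Partition key, one item per transcript entry
            'KeyType': 'HASH'
        }
    ],
    AttributeDefinitions=[
        {
            'AttributeName': 'start_time',
            'AttributeType': 'S'  # start_time values are strings in the Transcribe output
        }
    ],
    ProvisionedThroughput={
        'ReadCapacityUnits': 10000,   # same values as the question; far lower values are usually enough
        'WriteCapacityUnits': 10000
    }
)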

##This is a way to batch write
with table.batch_writer() as batch:
    for item in jsonDict["results"]["items"]:   
        batch.put_item(Item=item)
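One wrinkle the snippet above doesn't cover (this part is my suggestion, not from the original code): punctuation items have no start_time, so writing them unchanged would raise the same missing-key ValidationException. A minimal sketch that synthesizes a key for them from the preceding item's end_time:

last_end = "0"
with table.batch_writer() as batch:
    for i, item in enumerate(jsonDict["results"]["items"]):
        if "start_time" not in item:
            # Punctuation item: build a unique string key so the write succeeds
            item["start_time"] = "{}#punct{}".format(last_end, i)
        else:
            last_end = item.get("end_time", item["start_time"])
        batch.put_item(Item=item)

If you don't need the punctuation rows at all, simply skipping items without a start_time works just as well.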

Option 2

Here you should not create the table on every invocation. Create it just once, then add each entry to it. In the code below the table name is "commonTable".

Replace 'AttributeName': 'type' in the KeySchema and AttributeDefinitions with 'AttributeName': 'tname'.
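A sketch of the one-time setup this implies, reusing the question's create_table arguments with the key renamed to tname and the throughput lowered to placeholder values; the ResourceInUseException guard is an addition of mine so the Lambda can keep running once the table already exists:

from botocore.exceptions import ClientError

try:
    dynamodb.create_table(
        TableName='commonTable',
        KeySchema=[{'AttributeName': 'tname', 'KeyType': 'HASH'}],
        AttributeDefinitions=[{'AttributeName': 'tname', 'AttributeType': 'S'}],
        ProvisionedThroughput={'ReadCapacityUnits': 10, 'WriteCapacityUnits': 10}
    )
except ClientError as e:
    if e.response['Error']['Code'] != 'ResourceInUseException':
        raise  # an existing table is fine; anything else is a real error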

table.meta.client.get_waiter('table_exists').wait(TableName="commonTable")
table = dynamodb.Table("commonTable")  # Specifies table to be used
jsonDict['tname'] = tname      # this is also the key name 'tname'
table.put_item(Item=jsonDict)