
I am using a bucket policy that denies any non-SSL communication and unencrypted object uploads (UnEncryptedObjectUploads).

{
    "Id": "Policy1361300844915",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnSecureCommunications",
            "Action": "s3:*",
            "Effect": "Deny",
            "Resource": "arn:aws:s3:::my-bucket",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": false
                }
            },
            "Principal": {
                "AWS": "*"
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Action": "s3:PutObject",
            "Effect": "Deny",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            },
            "Principal": {
                "AWS": "*"
            }
        }
    ]
}

This policy works for applications that support the SSL and SSE settings, but only for objects that are being uploaded.

I have run into the following problems:

  1. CloudBerry Explorer and S3 Browser fail to rename folders and files in a bucket that uses this bucket policy. After I applied only the SSL requirement in the bucket policy, both browsers completed file/folder renames successfully.

CloudBerry Explorer can rename objects under the full SSL/SSE bucket policy only after I enable the option "Copy/Move inside Amazon S3 through the local computer" (which is slower and costs money).

All copy/move operations inside Amazon S3 fail because of this restrictive policy.

This means we have no control over the copy/move process of applications that do not operate on local copies of the objects. At least the CloudBerry option mentioned above seems to demonstrate this.

But I may be wrong, which is why I am posting this question.
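
For reference, my understanding is that a rename is a server-side COPY plus DELETE, and the COPY request itself has to carry the x-amz-server-side-encryption header to pass the second statement. A rough sketch of a rename that I would expect to be allowed, using boto3 (bucket and key names are just placeholders):

    import boto3

    # boto3 talks to S3 over HTTPS by default, so the
    # DenyUnSecureCommunications statement is satisfied.
    s3 = boto3.client("s3")

    # A "rename" is a server-side copy followed by a delete. The copy must
    # re-send the SSE header, otherwise the bucket policy denies it.
    s3.copy_object(
        Bucket="my-bucket",
        Key="new-name.txt",
        CopySource={"Bucket": "my-bucket", "Key": "old-name.txt"},
        ServerSideEncryption="AES256",  # without this header the copy is denied
        MetadataDirective="COPY",       # keep the rest of the metadata
    )
    s3.delete_object(Bucket="my-bucket", Key="old-name.txt")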

  2. In my case, the S3 Management Console becomes useless once this bucket policy is enabled. Users cannot create or delete folders; they can only upload files.
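
  If I understand correctly, the console represents a folder as a zero-byte object whose key ends in "/", and its PutObject call does not set the SSE header, so the policy denies it. A hypothetical workaround is to create the placeholder through the API with the header set (sketch with boto3, placeholder names):

    import boto3

    s3 = boto3.client("s3")

    # The console's "folder" is a zero-byte object whose key ends in "/".
    # Creating it via the API with the SSE header satisfies the bucket policy.
    s3.put_object(
        Bucket="my-bucket",
        Key="reports/2014/",            # trailing slash = "folder" placeholder
        Body=b"",
        ServerSideEncryption="AES256",
    )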

Is there something wrong with my bucket policy? I do not know which mechanisms Amazon S3 uses internally for these object operations.

Does Amazon S3 handle external requests (API/HTTP headers) and internal requests differently?

Is it possible to apply this policy only to uploads, and not to internal Amazon S3 GET/PUT operations and the like? I tried using the HTTP referer of the bucket URL, to no avail.

A bucket policy with the SSL/SSE requirements is mandatory for my implementation.

Any ideas would be appreciated.

Thanks in advance.


1 Answer


IMHO, there is no way to tell Amazon S3 to automatically turn on SSE for every PUT request. So, what I would investigate is the following:

  • write a script that lists your bucket

  • for each object, get the metadata

  • if SSE is not enabled, use the PUT Object - Copy API (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html) to add SSE: "(...) When copying an object, you can preserve most of the metadata (default) or specify new metadata (...)"

  • if the PUT operation succeeds, use the DELETE Object API to delete the original object

Then run that script on an hourly or daily basis, depending on your business requirements. You can use the S3 API in Python (http://boto.readthedocs.org/en/latest/ref/s3.html) to make the script easier to write.
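
A minimal sketch of that script, assuming the newer boto3 SDK rather than the boto 2 API linked above, with a placeholder bucket name. Note that S3 accepts a copy of an object onto itself when the encryption setting changes, so this variant replaces the object in place and skips the separate DELETE step:

    import boto3

    BUCKET = "my-bucket"  # placeholder

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    # Walk the whole bucket and re-copy every object that is not yet encrypted.
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            head = s3.head_object(Bucket=BUCKET, Key=key)
            if head.get("ServerSideEncryption") == "AES256":
                continue  # already encrypted at rest
            # Copy the object onto itself, adding SSE; metadata is preserved.
            s3.copy_object(
                Bucket=BUCKET,
                Key=key,
                CopySource={"Bucket": BUCKET, "Key": key},
                ServerSideEncryption="AES256",
                MetadataDirective="COPY",
            )
            print("Encrypted", key)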

If this "change-after-write" solution is not workable for you business-wise, you can work at a different level:

  • use a proxy between your API clients and the S3 API (like a reverse proxy on your site), and configure it to add the SSE HTTP header to every PUT/POST request. Developers must go through the proxy and must not be authorised to issue requests against the S3 API endpoints directly

  • write a wrapper library that adds the SSE metadata automatically and oblige developers to use your library on top of the SDK (a sketch follows below)

The latter two are a matter of discipline in the organisation, as it is not easy to enforce them at a technical level.
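
A sketch of what such a wrapper could look like, again with boto3; the function name and its defaults are made up for illustration:

    import boto3

    _s3 = boto3.client("s3")

    def put_object_encrypted(bucket, key, body, **kwargs):
        """Thin wrapper around PutObject that always asks for SSE.

        Developers call this instead of the raw SDK method, so every upload
        carries the x-amz-server-side-encryption header required by the
        bucket policy.
        """
        kwargs.setdefault("ServerSideEncryption", "AES256")
        return _s3.put_object(Bucket=bucket, Key=key, Body=body, **kwargs)

    # Usage:
    # put_object_encrypted("my-bucket", "reports/2014/summary.csv", b"hello")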

Seb

answered 2014-04-23T15:47:41.597