
TL;DR

When uploading a file directly from the browser using the s3.upload() method of the AWS SDK for JavaScript in the Browser, combined with temporary IAM credentials generated by a call to AWS.STS.getFederationToken(), everything works fine for non-multipart uploads and for the first part of a multipart upload.

But when s3.upload() attempts to send the second part of a multipart upload S3 responds with a 403 Access Denied error.

Why?



The Context

I'm implementing an uploader in my app that will enable multipart (chunked) uploads directly from the browser to my S3 bucket.

To achieve this, I'm using the s3.upload() method of the AWS SDK for JavaScript in the Browser, which I understand to be little more than sugar around new AWS.S3.ManagedUpload().

A simple illustration of what I'm attempting can be found here: https://aws.amazon.com/blogs/developer/announcing-the-amazon-s3-managed-uploader-in-the-aws-sdk-for-javascript/

Additionally, I'm also using AWS.STS.getFederationToken() as a means to vend temporary IAM User credentials from my API layer to authorize the uploads.

The 1,2,3:

  1. The user initiates an upload by choosing a file via a standard HTML <input type="file">.
  2. This triggers an initial request to my API layer to ensure the user has the necessary privileges on my own system to perform this action. If so, my server calls AWS.STS.getFederationToken() with a Policy param that scopes their privileges down to nothing more than uploading the file to the key provided, and then returns the resulting temporary creds to the browser.
  3. Now that the browser has the temp creds it needs, it can go about using them to create a new AWS.S3 client and then execute the AWS.S3.upload() method to perform a (supposedly) automagical multipart upload of the file.



The Code

api.myapp.com/vendUploadCreds.js

This is the API layer method called that generates and vends the temporary upload creds. At this point in the process the account has already been authenticated and authorized to receive the creds and upload the file.

module.exports = function vendUploadCreds(request, response) {

    var account = request.params.account;
    var file = request.params.file;
    var bucket = 'cdn.myapp.com';

    var sts = new AWS.STS({
        accessKeyId : process.env.MY_AWS_ACCESS_KEY_ID,
        secretAccessKey : process.env.MY_AWS_SECRET_ACCESS_KEY
    });

    /// The following policy is *exactly* the same as the S3 policy
    /// attached to the IAM user that executes this STS request.

    var policy = {
        Version : '2012-10-17',
        Statement : [
            {
                Effect : 'Allow',
                Action : [
                    's3:ListBucket',
                    's3:ListBucketMultipartUploads',
                    's3:ListBucketVersions',
                    's3:ListMultipartUploadParts',
                    's3:AbortMultipartUpload',
                    's3:GetObject',
                    's3:GetObjectVersion',
                    's3:PutObject',
                    's3:PutObjectAcl',
                    's3:PutObjectVersionAcl',
                    's3:DeleteObject',
                    's3:DeleteObjectVersion'
                ],
                Resource : [
                    'arn:aws:s3:::' + bucket + '/' + account._id + '/files/' + file.name
                ],
                Condition : {
                    StringEquals : {
                        's3:x-amz-acl' : ['private']
                    }
                }
            }
        ]
    };

    sts.getFederationToken({
        DurationSeconds : 129600, /// 36 hours
        Name : account._id + '-uptoken',
        Policy : JSON.stringify(policy)
    }, function(err, data) {

        if (err) {
            console.log(err, err.stack); // an error occurred
            return response.status(500).send(err);
        }

        response.send(data);

    });

}


console.myapp.com/uploader.js

This is a truncated illustration of the uploader on the browser-side that first calls the vendUploadCreds API method and then uses the resulting temporary creds to execute the multipart upload.

uploader.getUploadCreds : function(account, file) {

    /// A request is sent to api.myapp.com/vendUploadCreds
    /// Upon successful response, the creds are returned.

    request('https://api.myapp.com/vendUploadCreds', {
        params : {
            account : account,
            file : file
        }
    }, function(error, data) {
        /// `data` is the raw STS response; the temporary
        /// creds live under its `Credentials` property.
        upload.credentials = data.Credentials;
        this.uploadFile(upload);
    });

}

uploader.uploadFile : function(upload) {

    var uploadID = upload.id;

    /// The `upload` object coming through via the args has
    /// a `credentials` property containing the creds obtained
    /// via the `vendUploadCreds` method above.

    var credentials = new AWS.Credentials({
        accessKeyId : upload.credentials.AccessKeyId,
        secretAccessKey : upload.credentials.SecretAccessKey,
        sessionToken : upload.credentials.SessionToken
    });

    AWS.config.region = 'us-east-1';

    var s3 = new AWS.S3({
        credentials,
        signatureVersion : 'v2', /// 'v4' also attempted
        params : {
            Bucket : 'cdn.myapp.com'
        }
    });

    var uploader = s3.upload({
        Key : upload.key,
        ACL : 'private',
        ContentType : upload.file.type,
        Body : upload.file
    },{
        queueSize : 3,
        partSize : 1024 * 1024 * 5
    });

    uploader.on('httpUploadProgress', function(event) {
        var total = event.total;
        var loaded = event.loaded;
        var percent = loaded / total;
        percent = Math.ceil(percent * 100);
        console.log('Uploaded ' + percent + '% of ' + upload.key);
    });

    uploader.send(function(error, result) {
        console.log(error, result);
    });

}


cdn.myapp.com S3 Bucket CORS Configuration

So far as I can tell, this is wide open, so CORS shouldn't be the issue?

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>ETag</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>


The Error

Okay, so when I attempt to upload a file, it gets really confusing:

  1. Any file under 5 MB uploads just fine. Files under 5 MB (the minimum part size for an S3 multipart upload) don't require a multipart upload, so s3.upload() sends them as a standard PUT request. Makes sense, and they succeed just fine.
  2. Any file over 5 MB seems to upload fine, but only for the first part. When s3.upload() attempts to send the second part, S3 responds with a 403 Access Denied error.
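Some back-of-the-envelope arithmetic (assuming the 5 MB partSize from the uploader code above, and the 6.6 MB file from the example below) shows why files over 5 MB are exactly the ones that hit this: they're the first uploads that ever reach a second part.

```javascript
// Rough sketch of how ManagedUpload would chunk a 6.6 MB file
// with a 5 MB part size. Sizes are assumptions from this post,
// not values read out of the SDK.
const MB = 1024 * 1024;
const partSize = 5 * MB;
const fileSize = Math.round(6.6 * MB);

// Every full 5 MB chunk plus one trailing partial chunk.
const partCount = Math.ceil(fileSize / partSize);            // 2 parts
const lastPartSize = fileSize - partSize * (partCount - 1);  // ~1.6 MB

console.log(partCount, lastPartSize);
```

So anything up to 5 MB is a single plain PUT, and anything over it produces at least a partNumber=2 request, which is where the 403 appears.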

I hope you're a fan of info because here's a dump of the error that I get from Chrome when I attempt to upload Astrud Gilberto's melancholy classic "So Nice (Summer Samba)" (MP3, 6.6 MB):

General

Request URL:https://s3.amazonaws.com/cdn.myapp.com/5a2cbda70b9b741661ad98df/files/Astrud-Gilberto-So-Nice-1512903188573.mp3?partNumber=2&uploadId=ljaviv9n25aRKwc4HKGhBbbXTWI3wSGZwRRi39fPSEvU2dcM9G7gO6iu5w7va._dMTZil4e_b53Iy5ngojJqRr5F6Uo_ZXuF27yaqizeARmUVf5ZVeah8ZjYwkZV8C0i3rhluYoxFHUPxlLMjaKLww--
Request Method:PUT
Status Code:403 Forbidden
Remote Address:52.216.165.77:443
Referrer Policy:no-referrer-when-downgrade

Response Headers

Access-Control-Allow-Methods:GET, PUT, POST, DELETE
Access-Control-Allow-Origin:*
Access-Control-Expose-Headers:ETag
Access-Control-Max-Age:3000
Connection:close
Content-Type:application/xml
Date:Sun, 10 Dec 2017 10:53:12 GMT
Server:AmazonS3
Transfer-Encoding:chunked
Vary:Origin, Access-Control-Request-Headers, Access-Control-Request-Method
x-amz-id-2:0Mzo7b/qj0r5Is7aJIIJ/U2VxTTulWsjl5kJpTnEhy/B0fQDlRuANcursnxI71LA16AdePVSc/s=
x-amz-request-id:DA008A5116E0058F

Request Headers

Accept:*/*
Accept-Encoding:gzip, deflate, br
Accept-Language:en-US,en;q=0.9
Authorization:AWS ASIAJAR5KXKAOPTC64PQ:Wo9lbflZuVVS9+UTTDSjU0iPUbI=
Cache-Control:no-cache
Connection:keep-alive
Content-Length:1314943
Content-Type:application/octet-stream
DNT:1
Host:s3.amazonaws.com
Origin:http://132.12.23.145:8080
Pragma:no-cache
Referer:http://132.12.23.145:8080/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36
X-Amz-Date:Sun, 10 Dec 2017 10:53:09 GMT
x-amz-security-token:FQoDYXdzENT//////////wEaDK9srK2+5FN91W+T+SLSA/LdEwpOiY7wDkgggOMhuGEiqIXAQrFMk/EqvZFl8Npqx414WsL9E310rj5mU1RGXsxuN+ers1r6NVPpJIlXSDG7bnwlGabejNvDL9vMX5HJHGbZOEVUoaL60/T5NM+0TZtH61vHAEVmRVFKOB0tSez8TEU1jQ2cJME0THn5RuV/6CuIpA9dlEYO7/ajB5UKT3F1rBkt12b0DeWmKG2pvTJRwa8nrsF6Hk6dk1B1Hl1fUwAh9rD17O9Roi7MFLKisPH+96WX08liC8k+n+kPPOox6ZZM/lOMwlNinDjLc2iC+JD/6uxyAGpNbQ7OHAUsF7DOiMvw6Nv6PrImrBvnK439BhLOk1VXCfxxmtTWGim8TD1w1EciZcJhsuCMpDF8fMnhF/JFw3KNOJXHUtpTGRjNbOPcPojVs3FgIt+9MllIA0pGMr2bYmA3HvKewnhD2qeKkG3DPDIbpwuRoY4wIXCP5OclmoHp5nE5O94aRIvkBvS1YmqDQO+jTiI7/O7vlX63q9sGqdIA4nwzh5ASTRJhC2rKgxepFirEB53dCev8i9f1pwXG3/4H3TvPCLVpK94S7/csNJexJP75bPBpo4nDeIbOBKKIMuUDK1pQsyuGwuUolKS00QU=
X-Amz-User-Agent:aws-sdk-js/2.164.0 callback

Query String Params

partNumber:2
uploadId:ljaviv9n25aRKwc4HKGhBbbXTWI3wSGZwRRi39fPSEvU2dcM9G7gO6iu5w7va._dMTZil4e_b53Iy5ngojJqRr5F6Uo_ZXuF27yaqizeARmUVf5ZVeah8ZjYwkZV8C0i3rhluYoxFHUPxlLMjaKLww--

Actual Response Body

And here's the body of the response from S3:

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>8277A4969E955274</RequestId><HostId>XtQ2Ezv0Wa81Rm2jymB5ZwTe+OHfwTcnNapYMgceqZCJeb75YwOa1AZZ5/10CAeVgmfeP0BFXnM=</HostId></Error>
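S3 always returns errors as an XML body like the one above. For logging during debugging, a quick way to pull out the &lt;Code&gt; element is a simple regex (a sketch; a browser app could equally use DOMParser — the function name here is my own):

```javascript
// Extract the <Code> element from an S3 error response body.
// Deliberately naive: good enough for logging, not a full XML parser.
function s3ErrorCode(xmlBody) {
    var match = /<Code>([^<]+)<\/Code>/.exec(xmlBody);
    return match ? match[1] : null;
}

var body = '<?xml version="1.0" encoding="UTF-8"?>' +
    '<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>';

console.log(s3ErrorCode(body)); // "AccessDenied"
```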


The Questions

  1. It's obviously not an issue with the creds created by the sts.getFederationToken() request, because if it were then the smaller (non-multipart) uploads would fail as well, right?
  2. It's obviously not an issue with the CORS configuration on the cdn.myapp.com bucket, because if it were then the smaller (non-multipart) uploads would fail as well, right?
  3. Why would S3 accept partNumber=1 of a multipart upload, and then 403 on the partNumber=2 of the same upload?
1 Answer

A Solution

After many hours of wrestling with this I figured out that the issue was with the Condition block of the IAM Policy that I was sending through as the Policy param of my AWS.STS.getFederationToken() request. Specifically, AWS.S3.upload() only sends an x-amz-acl header on the first request, the call to S3.initiateMultipartUpload.

The x-amz-acl header is not included in the subsequent PUT requests for the actual parts of the upload.

I had the following condition on my IAM Policy, which I was using to ensure that any uploads must have an ACL of 'private':

Condition : {
    StringEquals : {
        's3:x-amz-acl' : ['private']
    }
}

So the initial PUT request to S3.initiateMultipartUpload was fine, but the subsequent PUTs failed because they didn't have the x-amz-acl header.

The solution was to edit the policy I was attaching to the temporary user and move the s3:PutObject permission into its own statement, and then adjust the condition to apply only if the targeted value exists. The final policy looks like so:

var policy = {
    Version : '2012-10-17',
    Statement : [
        {
            Effect : 'Allow',
            Action : [
                's3:PutObject'
            ],
            Resource : [
                'arn:aws:s3:::' + bucket + '/' + account._id + '/files/' + file.name
            ],
            Condition : {
                StringEqualsIfExists : {
                    's3:x-amz-acl' : ['private']
                }
            }
        },
        {
            Effect : 'Allow',
            Action : [
                's3:AbortMultipartUpload'
            ],
            Resource : [
                'arn:aws:s3:::' + bucket + '/' + account._id + '/files/' + file.name
            ]
        }
    ]
};
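To make the difference concrete, here's a toy evaluator (my own illustration, not the real IAM engine) showing how the two operators treat a request that omits the condition key entirely, which is exactly what the part-upload PUTs do:

```javascript
// StringEquals: the condition key must be present AND match.
function stringEquals(requestHeaders, key, allowed) {
    return (key in requestHeaders) && allowed.indexOf(requestHeaders[key]) !== -1;
}

// StringEqualsIfExists: an absent key skips the condition entirely;
// a present key must still match.
function stringEqualsIfExists(requestHeaders, key, allowed) {
    if (!(key in requestHeaders)) return true;
    return allowed.indexOf(requestHeaders[key]) !== -1;
}

var initiateRequest = { 'x-amz-acl' : 'private' }; // initiateMultipartUpload
var partRequest = {};                              // uploadPart: no ACL header

console.log(stringEquals(initiateRequest, 'x-amz-acl', ['private']));       // true
console.log(stringEquals(partRequest, 'x-amz-acl', ['private']));           // false -> 403
console.log(stringEqualsIfExists(partRequest, 'x-amz-acl', ['private']));   // true
```

With StringEqualsIfExists, the initial request still has to declare the private ACL, but the header-less part uploads are no longer rejected.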

Hopefully that'll save someone else from wasting three days on this.

Answered 2017-12-11T04:45:58.747