
I am very new to AWS S3 and am trying to upload a large file in chunks. From the UI, I send the file's chunk data (blobs) to a WCF service, which uploads them to S3 using the multipart upload API. Note that the file can be several GB in size, which is why I split it into chunks before uploading to S3.

public UploadPartResponse UploadChunk(Stream stream, string fileName, string uploadId, List<PartETag> eTags, int partNumber, bool lastPart)
{
    stream.Position = 0; // Throwing Exceptions

    //Step 1: build and send a multi upload request
    if (partNumber == 1)
    {
        var initiateRequest = new InitiateMultipartUploadRequest
        {
            BucketName = _settings.Bucket,
            Key = fileName
        };

        var initResponse = _s3Client.InitiateMultipartUpload(initiateRequest);
        uploadId = initResponse.UploadId;
    }

    //Step 2: upload each chunk (this is run for every chunk unlike the other steps which are run once)
    var uploadRequest = new UploadPartRequest
                        {
                            BucketName = _settings.Bucket,
                            Key = fileName,
                            UploadId = uploadId,
                            PartNumber = partNumber,
                            InputStream = stream,
                            IsLastPart = lastPart,
                            PartSize = stream.Length // Throwing Exceptions
                        };

    var response = _s3Client.UploadPart(uploadRequest);

    //Step 3: build and send the multipart complete request
    if (lastPart)
    {
        eTags.Add(new PartETag
        {
            PartNumber = partNumber,
            ETag = response.ETag
        });

        var completeRequest = new CompleteMultipartUploadRequest
        {
            BucketName = _settings.Bucket,
            Key = fileName,
            UploadId = uploadId,
            PartETags = eTags
        };

        try
        {
            _s3Client.CompleteMultipartUpload(completeRequest);
        }
        catch
        {
            //do some logging and return null response
            return null;
        }
    }

    response.ResponseMetadata.Metadata["uploadid"] = uploadRequest.UploadId;
    return response;
}

Here, both `stream.Position = 0` and `stream.Length` throw an exception like:

at System.ServiceModel.Dispatcher.StreamFormatter.MessageBodyStream.get_Length()

I then noticed that `stream.CanSeek` is `false`.

Do I actually need to buffer the whole stream into memory up front to make this work?

Update: I am doing the following and it works, but I don't know whether it is efficient.

    var ms = new MemoryStream();
    stream.CopyTo(ms);
    ms.Position = 0;

Is there any other way to do this? Thanks in advance.


2 Answers


A bit late, but note that TransferUtility supports streams directly:

https://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileDotNet.html
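A minimal sketch of what that could look like, reusing the `_s3Client`, `_settings.Bucket`, `fileName`, and `stream` names from the question's code (the `AutoCloseStream` setting and part-size behavior are assumptions; check the linked docs for your SDK version):

```csharp
using Amazon.S3.Transfer;

// TransferUtility decides internally whether to use a single PUT or a
// multipart upload, so the chunking logic in the question goes away.
var transferUtility = new TransferUtility(_s3Client);

var uploadRequest = new TransferUtilityUploadRequest
{
    BucketName = _settings.Bucket,
    Key = fileName,
    InputStream = stream,     // the stream received by the WCF service
    AutoCloseStream = true    // dispose the stream when the upload finishes
};

transferUtility.Upload(uploadRequest);
```

This removes the need to track `uploadId` and part ETags yourself, though it does require the whole file to be available to the service as one stream rather than as separate per-chunk calls.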

answered 2020-11-23T12:22:03.067

That is a fair approach, but I chose a different one: uploading directly to S3 using signed URLs. The benefit is that it takes some load off your server and reduces data transfer.

Depending on your application, it may be worth considering.

Getting a pre-signed URL in C#:

public string GetPreSignedUrl(string bucketName, string keyPrefix, string fileName)
{
    var client = new AmazonS3Client(_credentials, _region);
    var keyName = $"{keyPrefix}/{fileName}";
    var preSignedUrlRequest = new GetPreSignedUrlRequest()
    {
        BucketName = bucketName,
        Key = keyName,
        Expires = DateTime.Now.AddMinutes(5),
        Protocol = (Protocol.HTTPS)
    };
    return client.GetPreSignedURL(preSignedUrlRequest);
}

This creates a URL that the client can use to upload directly to S3, which you pass to the UI. You can then perform a multipart upload against pre-signed URLs.

Here is a good example of doing a multipart upload with axios: https://github.com/prestonlimlianjie/aws-s3-multipart-presigned-upload/blob/master/frontend/pages/index.js
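For multipart uploads specifically, the server has to hand out one pre-signed URL per part, tied to the upload's `UploadId`. A hedged sketch of that, extending the `GetPreSignedUrl` method above (`GetPartUploadUrls` and `partCount` are hypothetical names; `UploadId` and `PartNumber` on `GetPreSignedUrlRequest` are SDK properties intended for exactly this, but verify against your AWS SDK for .NET version):

```csharp
// Server-side: after calling InitiateMultipartUpload, generate one
// pre-signed PUT URL per part for the browser to upload against.
public List<string> GetPartUploadUrls(string bucketName, string keyName,
                                      string uploadId, int partCount)
{
    var client = new AmazonS3Client(_credentials, _region);
    var urls = new List<string>();

    for (var partNumber = 1; partNumber <= partCount; partNumber++)
    {
        var request = new GetPreSignedUrlRequest
        {
            BucketName = bucketName,
            Key = keyName,
            Verb = HttpVerb.PUT,          // parts are uploaded with PUT
            UploadId = uploadId,          // ties each URL to this multipart upload
            PartNumber = partNumber,
            Expires = DateTime.UtcNow.AddMinutes(30)
        };
        urls.Add(client.GetPreSignedURL(request));
    }
    return urls;
}
```

The client PUTs each chunk to its URL, collects the `ETag` response headers, and sends them back so the server can call `CompleteMultipartUpload`, as in the linked example.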

answered 2019-03-08T17:30:02.963