I can't get (low-level) multipart uploads to Amazon S3 to work today, even though they have worked, essentially flawlessly, for many, many months. In the last day or so they started failing. I had a queue of 80+ file uploads that went fine until about 60 files in, and then it failed more often than it succeeded. Since then, even single-file queues have failed.
The code I'm using is essentially the same as the low-level multipart upload sample in the documentation, except that a do-while loop retries the upload of an individual part if it fails. Only successful part uploads are added to the list that is later attached to the CompleteMultipartUploadRequest.
None of the part uploads fail, though; only the CompleteMultipartUploadRequest sent after all the parts have been uploaded does. This is the only exception I see on each failure, and it always originates from the CompleteMultipartUpload request.
I even wrapped the creation of the CompleteMultipartUpload object and the request itself in a loop, in case there was some issue with S3 being "ready" to stitch the parts together, but even a progressive back-off with significant delays doesn't help.
Exception: Maximum number of retry attempts reached : 3
Exception: at Amazon.S3.AmazonS3Client.pauseOnRetry(Int32 retries, Int32 maxRetries, HttpStatusCode status, String requestAddr, WebHeaderCollection headers, Exception cause)
at Amazon.S3.AmazonS3Client.handleRetry(S3Request userRequest, HttpWebRequest request, WebHeaderCollection respHdrs, Int64 orignalStreamPosition, Int32 retries, HttpStatusCode statusCode, Exception cause)
at Amazon.S3.AmazonS3Client.getResponseCallback[T](IAsyncResult result)
at Amazon.S3.AmazonS3Client.endOperation[T](IAsyncResult result)
at Amazon.S3.AmazonS3Client.EndCompleteMultipartUpload(IAsyncResult asyncResult)
at Amazon.S3.AmazonS3Client.CompleteMultipartUpload(CompleteMultipartUploadRequest request)
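For what it's worth, the "Maximum number of retry attempts reached : 3" message looks like the S3 client's own internal retry limit being exhausted rather than anything in my loops. A minimal sketch of how I could raise that limit when constructing the client, assuming this SDK version's AmazonS3Config exposes a MaxErrorRetry setting (the 3 in the message appears to match its default); the credential variables are just placeholders:
// sketch only: assuming AmazonS3Config.MaxErrorRetry exists in this SDK version
AmazonS3Config s3Config = new AmazonS3Config();
s3Config.MaxErrorRetry = 5; // default appears to be 3, per the exception message
// accessKeyId / secretAccessKey are placeholders for however the client is normally constructed
AmazonS3Client s3ClientMoreRetries = new AmazonS3Client(accessKeyId, secretAccessKey, s3Config);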
The code is below. Any suggestions as to what might be wrong?
// List to store upload part responses.
List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();
List<PartETag> uploadPartETags = new List<PartETag>();
// 1. Initialize.
InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest()
.WithBucketName(s3bucketName)
.WithKey(key);
initResponse = s3Client.InitiateMultipartUpload(initiateRequest);
bwLogUploadFiles("multipart upload ID " + initResponse.UploadId);
// 2. Upload Parts.
uploadFileSize = new FileInfo(sourceFilepath).Length;
uploadTypicalPartSize = PART_SIZE_DEFAULT; // 5 MB
uploadNumParts = uploadFileSize / uploadTypicalPartSize + 1;
Debug.WriteLine("# of parts: " + uploadNumParts);
int retryCount = 0;
long filePosition = 0;
for (int i = 1; filePosition < uploadFileSize; i++)
{
uploadCurrentPart = i;
//long percent = (100 * filePosition) / uploadFileSize;
//reportUploadProgress((int)percent, filePosition);
bwLogUploadFiles("upload part " + i + " of " + uploadNumParts);
retryCount = 0;
// make the part size exactly equal to the lesser of the part size (5MB) or the remaining amount
//long tmpPartSize = Math.Min(uploadTypicalPartSize, (uploadFileSize - filePosition));
// per documentation examples, just use the same part size every time, even if it's bigger than the remaining file length
long tmpPartSize = uploadTypicalPartSize;
// Create request to upload a part.
UploadPartRequest uploadRequest = new UploadPartRequest()
.WithBucketName(s3bucketName)
.WithKey(key)
.WithUploadId(initResponse.UploadId)
.WithPartNumber(i)
.WithPartSize(tmpPartSize)
.WithFilePosition(filePosition)
.WithFilePath(sourceFilepath)
.WithSubscriber(transferUtilityUploadSubscriberLowLevel)
.WithReadWriteTimeout(PART_TIMEOUT)
.WithTimeout(UPLOAD_TIMEOUT);
UploadPartResponse resp = null;
// repeat the part upload until it succeeds.
Boolean anotherPass;
do
{
anotherPass = false; // assume everything's ok
try {
// Upload part
resp = s3Client.UploadPart(uploadRequest);
// add response to our list.
uploadResponses.Add(resp);
// only creating PartETag and adding to a list for testing a different way of constructing the CompleteMultipartUploadRequest at the end.
PartETag petag = new PartETag(resp.PartNumber, resp.ETag);
uploadPartETags.Add(petag);
bwLogUploadFiles("upload part " + resp.PartNumber + " of " + uploadNumParts + " success. Part ETag "+resp.ETag);
}
catch (Exception e)
{
anotherPass = true; // repeat
retryCount++;
Debug.WriteLine(e.Message +": retry part #" + i);
bwLogUploadFiles("upload part " + i + " of " + uploadNumParts + " FAIL. Will retry if attempt #" + retryCount + "<"uploading part #"+i+" couldn't upload after "+MAX_RETRIES+" attempts. Upload failed");
filePosition += tmpPartSize;
}
//reportUploadProgress(100, uploadFileSize);
// Step 3: complete.
Boolean retryCompleteRequest = true;
Boolean completeSuccess = false;
int completeAttempts = 0;
const int delaySecondsMultiple = 3;
// retry a few times in case it's just a timing or S3 sync or readiness issue. Maybe giving it some time following the part uploads will do the trick
do
{
retryCompleteRequest = false;
try
{
bwLogUploadFiles("complete the multipart upload, attempt #"+(completeAttempts+1) );
if (completeAttempts >0)
{
bwLogUploadFiles("delay " + (delaySecondsMultiple * completeAttempts) + " seconds");
Thread.Sleep(delaySecondsMultiple * 1000 * completeAttempts);
}
Debug.WriteLine("now complete the Mulitpart Upload Request");
CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest()
.WithBucketName(s3bucketName)
.WithKey(key)
.WithUploadId(initResponse.UploadId)
//.WithPartETags(uploadResponses) // historically we've been attaching a List<UploadPartResponse>
.WithPartETags(uploadPartETags); // for testing we're trying List<PartETag>
CompleteMultipartUploadResponse completeUploadResponse = s3Client.CompleteMultipartUpload(completeRequest);
completeSuccess = true;
}
catch (Exception e)
{
completeAttempts++;
retryCompleteRequest = true;
Console.WriteLine("Exception occurred: {0}", e.Message);
Console.WriteLine(e.StackTrace);
bwLogUploadFiles("Exception: " + e.Message);
bwLogUploadFiles("Exception: " + e.StackTrace);
}
}
while (retryCompleteRequest && completeAttempts < MAX_RETRIES);