
This is how I create the OutputStream:

public OutputStream getOutputStream(@Nonnull final String uniqueId) throws PersistenceException {
        final PipedOutputStream outputStream = new PipedOutputStream();
        final PipedInputStream inputStream;
        try {
            inputStream = new PipedInputStream(outputStream);
            new Thread(
                    new Runnable() {
                        @Override
                        public void run() {
                            PutObjectRequest putObjectRequest = new PutObjectRequest("haritdev.sunrun", "sample.file.key", inputStream, new ObjectMetadata());
                            PutObjectResult result = amazonS3Client.putObject(putObjectRequest);
                            LOGGER.info("result - " + result.toString());
                            try {
                                inputStream.close();
                            } catch (IOException e) {
                                LOGGER.error("failed to close piped input stream for " + uniqueId, e);
                            }
                        }
                    }
            ).start();
        } catch (AmazonS3Exception e) {
            throw new PersistenceException("could not generate output stream for " + uniqueId, e);
        } catch (IOException e) {
            throw new PersistenceException("could not generate input stream for S3 for " + uniqueId, e);
        }
        try {
            return new GZIPOutputStream(outputStream);
        } catch (IOException e) {
            LOGGER.error(e.getMessage(), e);
            throw new PersistenceException("Failed to get output stream for " + uniqueId + ": " + e.getMessage(), e);
        }
    }
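For reference, here is a minimal, self-contained sketch of the piped-stream pattern using only JDK classes (no AWS types). Two points matter and both apply to the method above: the consumer must read on a different thread from the producer, and the producer must close its end of the pipe so the consumer sees EOF — otherwise a blocking reader like `putObject` waits forever. The `PipeDemo`/`roundTrip` names are illustrative, not from the original code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;

public class PipeDemo {

    // Writes `data` to a PipedOutputStream on a background thread and drains
    // the connected PipedInputStream on the calling thread.
    static String roundTrip(final String data) throws Exception {
        final PipedOutputStream out = new PipedOutputStream();
        // Connect the two ends up front; reading an unconnected pipe fails.
        PipedInputStream in = new PipedInputStream(out);

        Thread writer = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    out.write(data.getBytes(StandardCharsets.UTF_8));
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    try {
                        out.close(); // close the write end so the reader sees EOF
                    } catch (IOException ignored) {
                    }
                }
            }
        });
        writer.start();

        // Drain the read end on this thread until EOF.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[1024];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        writer.join();
        return new String(buffer.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello pipe"));
    }
}
```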

In the method below, I see my process die:

protected <X extends AmazonWebServiceRequest> Request<X> createRequest(String bucketName, String key, X originalRequest, HttpMethodName httpMethod) {
    Request<X> request = new DefaultRequest<X>(originalRequest, Constants.S3_SERVICE_NAME);
    request.setHttpMethod(httpMethod);
    if (bucketNameUtils.isDNSBucketName(bucketName)) {
        request.setEndpoint(convertToVirtualHostEndpoint(bucketName));
        request.setResourcePath(ServiceUtils.urlEncode(key));
    } else {
        request.setEndpoint(endpoint);

        if (bucketName != null) {
            /*
             * We don't URL encode the bucket name, since it shouldn't
             * contain any characters that need to be encoded based on
             * Amazon S3's naming restrictions.
             */
            request.setResourcePath(bucketName + "/"
                    + (key != null ? ServiceUtils.urlEncode(key) : ""));
        }
    }

    return request;
}

The process fails on request.setResourcePath(ServiceUtils.urlEncode(key)); so I can't even debug past it, even though the key is a valid name and is not null.

Can anyone help?

This is what the request looks like just before it dies:

request = {com.amazonaws.DefaultRequest@1931}"PUT https://my.bucket.s3.amazonaws.com / "
resourcePath = null
parameters = {java.util.HashMap@1959} size = 0
headers = {java.util.HashMap@1963} size = 0
endpoint = {java.net.URI@1965}"https://my.bucket.s3.amazonaws.com"
serviceName = {java.lang.String@1910}"Amazon S3"
originalRequest = {com.amazonaws.services.s3.model.PutObjectRequest@1285}
httpMethod = {com.amazonaws.http.HttpMethodName@1286}"PUT"
content = null

2 Answers


I tried the same approach, and it failed for me as well.

I ended up writing all the data to the output stream first, and only starting the upload to S3 after copying the data from the output stream into an input stream:

...
// Data written to outputStream here
...
byte[] byteArray = outputStream.toByteArray();
amazonS3Client.uploadPart(new UploadPartRequest()
  .withBucketName(bucket)
  .withKey(key)
  .withInputStream(new ByteArrayInputStream(byteArray))
  .withPartSize(byteArray.length)
  .withUploadId(uploadId)
  .withPartNumber(partNumber));

Having to write the entire chunk of data to memory before the upload to S3 can even start somewhat defeats the purpose of using streams, but it was the only way I could get it to work.
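This buffer-first approach can be sketched with plain JDK classes only (no AWS SDK): compress the payload fully into memory, after which the resulting byte array provides both the input stream and the exact part size that `withPartSize` needs. The `gzipToBytes`/`gunzip` names below are illustrative, not from the answer's code; the GZIP step mirrors the question's `getOutputStream`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class BufferFirstDemo {

    // Compress the payload entirely into memory; the byte[] result gives a
    // stream to upload and a known length in one pass.
    static byte[] gzipToBytes(byte[] payload) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(buffer);
        gzip.write(payload);
        gzip.close(); // close() flushes the GZIP trailer into the buffer
        return buffer.toByteArray();
    }

    // Decompress, to verify the round trip.
    static byte[] gunzip(byte[] compressed) throws IOException {
        GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[1024];
        int n;
        while ((n = gzip.read(chunk)) != -1) {
            out.write(chunk, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] compressed = gzipToBytes("hello s3".getBytes(StandardCharsets.UTF_8));
        // compressed.length is what would be passed as the part size
        System.out.println(compressed.length);
    }
}
```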

Answered 2013-06-14T05:46:14.030

Here is what I tried, and it worked:

try (PipedOutputStream pipedOutputStream = new PipedOutputStream();
     // connect the two ends; an unconnected PipedInputStream fails on read
     PipedInputStream pipedInputStream = new PipedInputStream(pipedOutputStream)) {
    new Thread(new Runnable() {

        @Override
        public void run() {
            try {
                // write some data to pipedOutputStream
            } catch (IOException e) {
                // handle exception
            } finally {
                try {
                    pipedOutputStream.close(); // signal EOF to the reading side
                } catch (IOException ignored) {
                }
            }
        }
    }).start();
    PutObjectRequest putObjectRequest = new PutObjectRequest(BUCKET, FILE_NAME, pipedInputStream, new ObjectMetadata());
    s3Client.putObject(putObjectRequest);
}

Using this code with S3 produces a warning that the content length has not been set, so S3 will buffer the contents and may cause an OutOfMemoryError. I don't see any cheap way to set the content length in ObjectMetadata just to get rid of this message, and I hope the AWS SDK does not stream the whole thing into memory just to find the content length.
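If the payload can be produced twice (or buffered once), one plain-JDK way to learn the content length before the real upload is to count bytes as they pass through a wrapper stream; the resulting length could then, as an assumption about the caller's flow, be passed to `ObjectMetadata.setContentLength` before `putObject`. The `CountingOutputStream` below is a hypothetical helper written here for illustration, not a class from the AWS SDK:

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Wraps any OutputStream and tallies how many bytes flow through it.
public class CountingOutputStream extends FilterOutputStream {
    private long count;

    public CountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        count += len;
    }

    public long getCount() {
        return count;
    }

    public static void main(String[] args) throws IOException {
        CountingOutputStream counting = new CountingOutputStream(new ByteArrayOutputStream());
        counting.write("hello".getBytes(StandardCharsets.UTF_8));
        // counting.getCount() now holds the exact content length written
        System.out.println(counting.getCount());
    }
}
```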

Answered 2016-11-04T23:54:57.830