
I need to stream a file from an API endpoint to two different S3 buckets. The original upload is made with:

curl -X PUT -F "data=@sample" "http://localhost:3000/upload/1/1"

The endpoint that receives the file:

const PassThrough = require('stream').PassThrough;

async function uploadFile (req, res) {
  try {
    const firstS3Stream = new PassThrough();
    const secondS3Stream = new PassThrough();
    req.pipe(firstS3Stream);
    req.pipe(secondS3Stream);

    await Promise.all([
      uploadToFirstS3(firstS3Stream),
      uploadToSecondS3(secondS3Stream),
    ]);
    return res.end();
  } catch (err) {
    console.log(err)
    return res.status(500).send({ error: 'Unexpected error during file upload' });
  }
}

As you can see, I use two PassThrough streams in order to duplicate the request stream into two readable streams, as suggested in this SO thread.

This code stays the same; the interesting part here is the uploadToFirstS3 and uploadToSecondS3 functions. In this minimal example both do exactly the same thing with different configurations, so I will only show one of them here.

What works:

const aws = require('aws-sdk');

const s3 = new aws.S3({
  accessKeyId: S3_API_KEY,
  secretAccessKey: S3_API_SECRET,
  region: S3_REGION,
  signatureVersion: 'v4',
});

const uploadToFirstS3 = (stream) => (new Promise((resolve, reject) => {
  const uploadParams = {
    Bucket: S3_BUCKET_NAME,
    Key: 'some-key',
    Body: stream,
  };
  s3.upload(uploadParams, (err) => {
    if (err) return reject(err);
    resolve(true);
  });
}));

This code (based on the aws-sdk package) works fine. My problem is that I want it to run with the @aws-sdk/client-s3 package, in order to reduce the size of the project.

What doesn't work:

I first tried using S3Client.send(PutObjectCommand):

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({
  credentials: {
    accessKeyId: S3_API_KEY,
    secretAccessKey: S3_API_SECRET,
  },
  region: S3_REGION,
  signatureVersion: 'v4',
});

const uploadToFirstS3 = (stream) => (new Promise((resolve, reject) => {
  const uploadParams = {
    Bucket: S3_BUCKET_NAME,
    Key: 'some-key',
    Body: stream,
  };
  s3.send(new PutObjectCommand(uploadParams), (err) => {
    if (err) return reject(err);
    resolve(true);
  });
}));

Then I tried S3.putObject(PutObjectCommandInput):

const { S3 } = require('@aws-sdk/client-s3');

const s3 = new S3({
  credentials: {
    accessKeyId: S3_API_KEY,
    secretAccessKey: S3_API_SECRET,
  },
  region: S3_REGION,
  signatureVersion: 'v4',
});

const uploadToFirstS3 = (stream) => (new Promise((resolve, reject) => {
  const uploadParams = {
    Bucket: S3_BUCKET_NAME,
    Key: 'some-key',
    Body: stream,
  };
  s3.putObject(uploadParams, (err) => {
    if (err) return reject(err);
    resolve(true);
  });
}));

The last two examples both give me a 501 Not Implemented error about the Transfer-Encoding header. I checked req.headers and there is no Transfer-Encoding in it, so I guess the SDK adds it to the request it sends to S3?
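One way to probe that guess (a hypothetical workaround sketch, not something from the original post): PutObject wants a known Content-Length, and when Body is a stream of unknown size the v3 SDK may fall back to chunked Transfer-Encoding, which S3 can reject with 501. If the incoming request carries a Content-Length header, forwarding it as ContentLength avoids the fallback. The helper name and bucket value below are placeholders:

```javascript
// Hypothetical helper: build PutObject params, forwarding the client's
// Content-Length so the SDK does not need chunked Transfer-Encoding.
function buildPutParams(req, stream) {
  const params = {
    Bucket: "my-bucket", // placeholder for S3_BUCKET_NAME
    Key: "some-key",
    Body: stream,
  };
  const len = Number(req.headers["content-length"]);
  if (Number.isFinite(len)) params.ContentLength = len; // explicit body size
  return params;
}
// Usage sketch: s3.send(new PutObjectCommand(buildPutParams(req, firstS3Stream)), ...)
```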

Since the first example (based on aws-sdk) works fine, I am sure the error is not due to an empty body in the request, as suggested in this SO thread.

Still, I thought the stream might not be readable yet when the upload is triggered, so I wrapped the calls to uploadToFirstS3 and uploadToSecondS3 in a callback triggered by req.on('readable', callback), but nothing changed.

I want to process the file in memory at all times, without storing it on disk. Is there a way to achieve this with the @aws-sdk/client-s3 package?


1 Answer


In S3 you can use the Upload class from @aws-sdk/lib-storage to do a multipart upload. Unfortunately it seems this may not be mentioned in the documentation site for @aws-sdk/client-s3.

It is mentioned in the upgrading guide here: https://github.com/aws/aws-sdk-js-v3/blob/main/UPGRADING.md#s3-multipart-upload

Here is the example provided in https://github.com/aws/aws-sdk-js-v3/tree/main/lib/lib-storage:

  import { Upload } from "@aws-sdk/lib-storage";
  import { S3Client } from "@aws-sdk/client-s3";

  const target = { Bucket, Key, Body };
  try {
    const parallelUploads3 = new Upload({
      client: new S3Client({}),
      tags: [...], // optional tags
      queueSize: 4, // optional concurrency configuration
      leavePartsOnError: false, // optional manually handle dropped parts
      params: target,
    });

    parallelUploads3.on("httpUploadProgress", (progress) => {
      console.log(progress);
    });

    await parallelUploads3.done();
  } catch (e) {
    console.log(e);
  }
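Applied to the question's setup, the asker's uploadToFirstS3 might look roughly like this. This is a sketch only, untested against a real bucket: the bucket name, key, and region are placeholders for the question's S3_BUCKET_NAME etc., and the packages are required lazily inside the function so the sketch parses on its own:

```javascript
// Mirror the question's uploadParams shape; values here are placeholders.
function buildUploadParams(stream) {
  return {
    Bucket: "my-bucket", // placeholder for S3_BUCKET_NAME
    Key: "some-key",
    Body: stream,        // a PassThrough stream fed by req.pipe(...)
  };
}

async function uploadToFirstS3(stream) {
  // Required inside the function so the sketch loads without the packages.
  const { Upload } = require("@aws-sdk/lib-storage");
  const { S3Client } = require("@aws-sdk/client-s3");

  const upload = new Upload({
    client: new S3Client({ region: "us-east-1" }), // placeholder region
    params: buildUploadParams(stream),
  });
  return upload.done(); // resolves when the multipart upload completes
}
```

Unlike PutObject, the Upload class buffers the stream into parts itself, so it does not need to know the total content length up front, which is what makes it suitable for piping a request body straight through.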
answered 2021-11-29T18:11:28.597