
I am trying to download/upload files to Amazon S3 using akka-streams and akka-http together with the alpakka library. I see two possibly related problems...

  • I can only download very small files, the largest being 8KB.
  • I cannot upload larger files. The upload fails with the message

    Error during processing of request: 'Substream Source has not been materialized in 5000 milliseconds'. Completing with 500 Internal Server Error response. To change default exception handling behavior, provide a custom ExceptionHandler. akka.stream.impl.SubscriptionTimeoutException: Substream Source has not been materialized in 5000 milliseconds

Here are my routes:

pathEnd {
  post {
    fileUpload("attachment") {
      case (metadata, byteSource) =>
        val writeResult: Future[MultipartUploadResult] =
          byteSource.runWith(client.multipartUpload("bucketname", key))
        onSuccess(writeResult) { result =>
          complete(result.location.toString())
        }
    }
  }
} ~

path("key" / Segment) { key =>
  get {
    val result: Future[ByteString] =
      client.download("bucketname", key).runWith(Sink.head)
    onSuccess(result) {
      complete(_)
    }
  }
}

Attempting to download a 100KB file ends up fetching a truncated version of the file, usually around 16-25KB in size. Any help is appreciated.

Edit: For the download issue, I took Stefano's suggestion and got

[error]  found   : akka.stream.scaladsl.Source[akka.util.ByteString,akka.NotUsed]
[error]  required: akka.http.scaladsl.marshalling.ToResponseMarshallable

This made it work:

complete(HttpEntity(ContentTypes.`application/octet-stream`, client.download("bucketname", key)))
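For reference, the full download route after this fix might look like the sketch below. It assumes the alpakka 0.x `S3Client` API from the question, where `client.download` returns a `Source[ByteString, _]`; `client` and the bucket name are placeholders taken from the question's code.

```scala
// Sketch of a streaming download route (assumes alpakka's S3Client,
// where `client.download` returns a Source[ByteString, _]).
path("key" / Segment) { key =>
  get {
    // Pass the source straight into a chunked HttpEntity so the response
    // is streamed to the client instead of buffered with Sink.head.
    complete(HttpEntity(ContentTypes.`application/octet-stream`,
      client.download("bucketname", key)))
  }
}
```

The important difference from the original route is that no `runWith` is called: the bytes flow from S3 to the HTTP response with backpressure, so file size no longer matters.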

1 Answer


1) Regarding the download problem: by calling

val result: Future[ByteString] = 
         client.download("bucketname", key).runWith(Sink.head)

you are streaming all the data from S3 into memory and then serving the result.

Akka HTTP has streaming support, which allows you to stream the bytes directly from the source without buffering them all in memory. More info on this can be found in the docs. In practice, this means the complete directive can take a Source[ByteString, _], as in

...
get {
  complete(client.download("bucketname", key))
}

2) Regarding the upload problem: you can try to tweak Akka HTTP's akka.http.server.parsing.max-content-length setting:

# Default maximum content length which should not be exceeded by incoming request entities.
# Can be changed at runtime (to a higher or lower value) via the `HttpEntity::withSizeLimit` method.
# Note that it is not necessarily a problem to set this to a high value as all stream operations
# are always properly backpressured.
# Nevertheless you might want to apply some limit in order to prevent a single client from consuming
# an excessive amount of server resources.
#
# Set to `infinite` to completely disable entity length checks. (Even then you can still apply one
# programmatically via `withSizeLimit`.)
max-content-length = 8m
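To raise the limit globally instead of per-route, the setting can be overridden in your own application.conf; the 200m value below is only an example, not a recommendation:

```
# application.conf (example override; pick a limit suited to your uploads)
akka.http.server.parsing.max-content-length = 200m
```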

The resulting code to test this would be something like:

  withoutSizeLimit {
    fileUpload("attachment") {
      ...
    }
  }
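Putting this together with the upload route from the question, the wrapped version could look like the following sketch; `withoutSizeLimit` disables the entity size check for this route only, and `client` and `key` are placeholders from the question's code.

```scala
// Sketch: the question's upload route with the size limit disabled.
pathEnd {
  post {
    // Lift the max-content-length check for this route only.
    withoutSizeLimit {
      fileUpload("attachment") {
        case (metadata, byteSource) =>
          // Stream the incoming bytes straight into an S3 multipart upload.
          val writeResult: Future[MultipartUploadResult] =
            byteSource.runWith(client.multipartUpload("bucketname", key))
          onSuccess(writeResult) { result =>
            complete(result.location.toString())
          }
      }
    }
  }
}
```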
answered 2017-09-19T20:44:05.457