
When writing a file to S3 using s3fs, if that file is accessed while it is being written, the data in the file is lost.

We first noticed this issue on a Red Hat Linux server where we kept a product we were beta testing. To fix it, we moved the product to an Ubuntu instance, and the problem went away.

We then set up a server for a client who wanted Red Hat, moved some code to that server, and it is now having the same overwrite issues.


1 Answer


The behavior you describe makes sense once you understand how S3 differs from a standard volume.

An operating system can read and write a standard volume at the block level. Multiple processes can access the same file, but some locking is needed to prevent corruption.

S3 treats operations at the whole-file level. A file is either uploaded in its entirety or it does not exist at all.
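To make the contrast concrete, here is a minimal in-memory sketch of those whole-object semantics. The `ObjectStore` class is purely illustrative, not the real S3 API: a PUT replaces the entire object at once, and there is no notion of a partial, block-level write to an existing object.

```python
class ObjectStore:
    """Toy stand-in for an S3-like store (illustrative only)."""

    def __init__(self):
        self._objects = {}

    def put_object(self, key, data):
        # Whole-object semantics: the new data replaces the old in one step.
        # There is no API to seek into an object and overwrite a few blocks.
        self._objects[key] = bytes(data)

    def get_object(self, key):
        return self._objects[key]


store = ObjectStore()
store.put_object("report.csv", b"v1")
store.put_object("report.csv", b"v2")  # full replacement, never a partial write
```

A reader of `report.csv` sees either the old object or the new one, never a half-written mixture; s3fs has to paper over this model to present a mountable filesystem.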

s3fs tries to put a filesystem interface over something that is not a volume, so that you can mount it. Behind the scenes, it copies every file you access to the local filesystem and stores it in a temporary directory. While whole-file operations (copy, delete, etc.) generally work through s3fs, attempting block-level operations on a file opened directly through s3fs will end badly.

There are other options. If you can rewrite your code to pull files from S3 and push them back, that will work, but it sounds like you need something that behaves more like NFS.
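The pull-and-push pattern suggested above can be sketched as follows. This is a hypothetical helper, not code from the original post: `process_s3_file` and `transform` are invented names, and the `s3` argument is assumed to expose boto3-style `download_file`/`upload_file` methods. The key point is that the file is edited only as a local copy and then re-uploaded as a whole object.

```python
import os
import tempfile


def process_s3_file(s3, bucket, key, transform):
    """Pull the whole object, modify it locally, push it back whole.

    `s3` is assumed to behave like a boto3 S3 client
    (download_file(bucket, key, path) / upload_file(path, bucket, key)).
    """
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        s3.download_file(bucket, key, path)   # pull: fetch the entire object
        with open(path, "rb") as f:
            data = f.read()
        with open(path, "wb") as f:
            f.write(transform(data))          # edit only the local copy
        s3.upload_file(path, bucket, key)     # push: replace the object in one shot
    finally:
        os.remove(path)
```

Because the upload happens once, at the end, other readers of the object never see a partially written file, which is exactly the failure mode described in the question.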

Answered 2015-03-30T17:34:35.170