
I am writing a FUSE filesystem in Java using the jnr-fuse library (https://github.com/SerCeMan/jnr-fuse), which internally uses JNR for native access.

The filesystem serves as a frontend to an Amazon S3 bucket, essentially allowing users to mount their bucket as an ordinary storage device.

While working on my read method, I ran into the following JVM error:

*** Error in `/usr/local/bin/jdk1.8.0_65/bin/java': double free or corruption (!prev): 0x00007f3758953d80 ***

The error always occurs when trying to copy a file from the fuse-filesystem to the local FS, usually on the second invocation of the read method (i.e. for the second 128 kByte block of data):

 cp /tmp/fusetest/benchmark/benchmarkFile.large /tmp

The offending read method is:

public int read(String path, Pointer buf, @size_t long size, @off_t long offset, FuseFileInfo fi) {
    LOGGER.debug("Reading file {}, offset = {}, read length = {}", path, offset, size);
    S3fsNodeInfo nodeInfo;
    try {
        nodeInfo = this.dbHelper.getNodeInfo(S3fsPath.fromUnixPath(path));
    } catch (FileNotFoundException ex) {
        LOGGER.error("Read called on non-existing node: {}", path);
        return -ErrorCodes.ENOENT();
    }
    try {
        // *** important part start
        InputStream is = this.s3Helper.getInputStream(nodeInfo.getPath(), offset, size);
        byte[] data = new byte[is.available()];
        int numRead = is.read(data, 0, (int) size);
        LOGGER.debug("Got {} bytes from stream, putting to buffer", numRead);
        buf.put(offset, data, 0, numRead);
        return numRead;
        // *** important part end
    } catch (IOException ex) {
        LOGGER.error("Error while reading file {}", path, ex);
        return -ErrorCodes.EIO();
    }
}

The InputStream used here is actually a ByteArrayInputStream over a buffer that I use to reduce HTTP traffic to S3. I am currently running fuse in single-threaded mode to rule out any concurrency-related problems.
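
Roughly, the caching idea looks like the sketch below (simplified and illustrative only; the real helper also downloads the data from S3 and deals with block boundaries, as the log output further down suggests):

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.util.HashMap;
    import java.util.Map;

    // Simplified sketch: each file is held in an in-memory byte[] after the
    // first download, and later read() calls are served from that array
    // through a ByteArrayInputStream instead of going back to S3.
    public class CachedS3Helper {
        private final Map<String, byte[]> cache = new HashMap<>();

        public InputStream getInputStream(String path, long offset, long size) {
            byte[] block = cache.get(path); // assume an earlier call filled the cache
            int from = (int) offset;
            int length = (int) Math.min(size, block.length - from);
            return new ByteArrayInputStream(block, from, length);
        }
    }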

Interestingly, I already had a working version that did no internal caching at all but was otherwise exactly identical to the code shown here.
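
That earlier version simply turned every read into a ranged GET against S3. A minimal sketch of what such a helper might look like (the class and method names mirror my code, but the body here is only an assumption based on the AWS SDK v1 API):

    import java.io.InputStream;

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.model.GetObjectRequest;

    // Sketch of a non-caching helper: every read() becomes a ranged HTTP GET
    // against S3, returning exactly the bytes [offset, offset + size).
    public class S3Helper {
        private final AmazonS3 s3;
        private final String bucket;

        public S3Helper(AmazonS3 s3, String bucket) {
            this.s3 = s3;
            this.bucket = bucket;
        }

        public InputStream getInputStream(String key, long offset, long size) {
            GetObjectRequest request = new GetObjectRequest(bucket, key)
                    .withRange(offset, offset + size - 1); // range end is inclusive
            return s3.getObject(request).getObjectContent();
        }
    }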

Unfortunately, I don't really know much about JVM internals, so I am not sure how to get to the bottom of this - normal debugging gets me nowhere, since the actual error seems to happen on the C side.

Here is the full console output for the read operations triggered by the command above:

2016-02-29 02:08:45,652 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Reading file /benchmark/benchmarkFile.large, offset = 0, read length = 131072
unique: 7, opcode: READ (15), nodeid: 3, insize: 80, pid: 8297
read[0] 131072 bytes from 0 flags: 0x8000
2016-02-29 02:08:46,024 DEBUG s3fs.fs.CachedS3Helper [main] - Getting data from cache - path = /benchmark/benchmarkFile.large, offset = 0, length = 131072
2016-02-29 02:08:46,025 DEBUG s3fs.fs.CachedS3Helper [main] - Path /benchmark/benchmarkFile.large not yet in cache, add it
2016-02-29 02:08:57,178 DEBUG s3fs.fs.CachedS3Helper [main] - Path /benchmark/benchmarkFile.large found in cache!
   read[0] 131072 bytes from 0
   unique: 7, success, outsize: 131088
2016-02-29 02:08:57,179 DEBUG s3fs.fs.CachedS3Helper [main] - Starting actual cache read for path /benchmark/benchmarkFile.large
2016-02-29 02:08:57,179 DEBUG s3fs.fs.CachedS3Helper [main] - Reading data from cache block 0, blockOffset = 0, length = 131072
2016-02-29 02:08:57,179 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Got 131072 bytes from stream, putting to buffer
2016-02-29 02:08:57,180 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Reading file /benchmark/benchmarkFile.large, offset = 131072, read length = 131072
unique: 8, opcode: READ (15), nodeid: 3, insize: 80, pid: 8297
read[0] 131072 bytes from 131072 flags: 0x8000
2016-02-29 02:08:57,570 DEBUG s3fs.fs.CachedS3Helper [main] - Getting data from cache - path = /benchmark/benchmarkFile.large, offset = 131072, length = 131072
2016-02-29 02:08:57,570 DEBUG s3fs.fs.CachedS3Helper [main] - Path /benchmark/benchmarkFile.large found in cache!
2016-02-29 02:08:57,570 DEBUG s3fs.fs.CachedS3Helper [main] - Starting actual cache read for path /benchmark/benchmarkFile.large
2016-02-29 02:08:57,571 DEBUG s3fs.fs.CachedS3Helper [main] - Reading data from cache block 0, blockOffset = 131072, length = 131072
2016-02-29 02:08:57,571 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Got 131072 bytes from stream, putting to buffer
   read[0] 131072 bytes from 131072
   unique: 8, success, outsize: 131088
*** Error in `/usr/local/bin/jdk1.8.0_65/bin/java': double free or corruption (!prev): 0x00007fcaa8b30c80 ***

1 Answer


Well, this was a really stupid mistake...

buf.put(offset, data, 0, numRead);

is of course nonsense - the offset parameter passed in denotes the offset within the file being read, not an offset into the buffer.

It works with:

buf.put(0, data, 0, numRead);

The rather cryptic error simply means that I was writing to a memory location I had no business writing to. Curious why it produced this particular error message rather than the segfault I would normally expect here...
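
For reference, applying the fix to the snippet from the question, the important part becomes (unchanged apart from the destination offset passed to buf.put):

    // Corrected "important part": `offset` selects where to read in the file,
    // but the data is always written at the start of the FUSE-provided buffer.
    InputStream is = this.s3Helper.getInputStream(nodeInfo.getPath(), offset, size);
    byte[] data = new byte[is.available()];
    int numRead = is.read(data, 0, (int) size);
    LOGGER.debug("Got {} bytes from stream, putting to buffer", numRead);
    buf.put(0, data, 0, numRead); // destination offset is 0, not `offset`
    return numRead;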

answered 2016-02-29T03:32:16.540