
I am trying to make some performance enhancements and want to use memory-mapped files for writing data. I ran some tests and, surprisingly, MappedByteBuffer seems slower than allocating direct buffers. I can't clearly understand why. Can someone hint at what might be going on under the hood? Below are my test results:

I am allocating 32KB buffers. I created a file of size 3GB before starting the tests, so growing the file is not the issue.

Test results: DirectBuffer vs MappedByteBuffer

I am including the code used for this performance test. Any input/explanation of this behavior is much appreciated.

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;

public class MemoryMapFileTest {

    /**
     * @param args
     * @throws IOException 
     */
    public static void main(String[] args) throws IOException { 

        for (int i = 0; i < 10; i++) {
            runTest();
        }

    }   

    private static void runTest() throws IOException {  

        FileChannel ch1 = null;
        FileChannel ch2 = null;
        ch1 = new RandomAccessFile(new File("S:\\MMapTest1.txt"), "rw").getChannel();
        ch2 = new RandomAccessFile(new File("S:\\MMapTest2.txt"), "rw").getChannel();

        FileWriter fstream = new FileWriter("S:\\output.csv", true);
        BufferedWriter out = new BufferedWriter(fstream);


        int[] numberofwrites = {1,10,100,1000,10000,100000};
        //int n = 10000;
        try {
            for (int j = 0; j < numberofwrites.length; j++) {
                int n = numberofwrites[j];
                long estimatedTime = 0;
                long mappedEstimatedTime = 0;

                for (int i = 0; i < n ; i++) {
                    byte b = (byte)Math.random();
                    long allocSize = 1024 * 32;

                    estimatedTime += directAllocationWrite(allocSize, b, ch1);
                    mappedEstimatedTime += mappedAllocationWrite(allocSize, b, i, ch2);

                }

                double avgDirectEstTime = (double)estimatedTime/n;
                double avgMapEstTime = (double)mappedEstimatedTime/n;
                out.write(n + "," + avgDirectEstTime/1000000 + "," + avgMapEstTime/1000000);
                out.write("," + ((double)estimatedTime/1000000) + "," + ((double)mappedEstimatedTime/1000000));
                out.write("\n");
                System.out.println("Avg Direct alloc and write: " + estimatedTime);
                System.out.println("Avg Mapped alloc and write: " + mappedEstimatedTime);

            }


        } finally {
            out.write("\n\n"); 
            if (out != null) {
                out.flush();
                out.close();
            }

            if (ch1 != null) {
                ch1.close();
            } else {
                System.out.println("ch1 is null");
            }

            if (ch2 != null) {
                ch2.close();
            } else {
                System.out.println("ch2 is null");
            }

        }
    }


    private static long directAllocationWrite(long allocSize, byte b, FileChannel ch1) throws IOException {
        long directStartTime = System.nanoTime();
        ByteBuffer byteBuf = ByteBuffer.allocateDirect((int)allocSize);
        byteBuf.put(b);
        ch1.write(byteBuf);
        return System.nanoTime() - directStartTime;
    }

    private static long mappedAllocationWrite(long allocSize, byte b, int iteration, FileChannel ch2) throws IOException {
        long mappedStartTime = System.nanoTime();
        MappedByteBuffer mapBuf = ch2.map(MapMode.READ_WRITE, iteration * allocSize, allocSize);
        mapBuf.put(b);
        return System.nanoTime() - mappedStartTime;
    }

}

2 Answers


You're testing the wrong thing. This isn't how you would write the code in either case. You should allocate the buffer once and keep updating its contents. You're including allocation time in the write time, which invalidates the measurement.
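To illustrate the point, here is a minimal sketch of what "allocate once, reuse" looks like for the direct-buffer case. The file name, buffer size, and iteration count are made up for the example; note that, unlike the original test, this version also calls `flip()` so only the bytes actually written are sent to the channel:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ReuseBufferSketch {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("reuse-buffer-demo.bin"); // hypothetical file name
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE,
                StandardOpenOption.TRUNCATE_EXISTING,
                StandardOpenOption.WRITE)) {

            // Allocate the direct buffer ONCE, outside any timed loop.
            ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 32);

            for (int i = 0; i < 1000; i++) {
                buf.clear();        // reset position/limit for reuse
                buf.put((byte) 1);  // update contents
                buf.flip();         // expose only the written bytes
                ch.write(buf);      // only this part belongs in the timing
            }
        }
    }
}
```

With the allocation hoisted out of the loop, the timed section measures only the channel write, which is what the benchmark was presumably meant to compare.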

Answered 2013-05-28T23:23:11.937

Swapping data out to disk is the main reason MappedByteBuffer is slower than DirectByteBuffer here. Allocating and freeing direct buffers (including MappedByteBuffers) is expensive, and both examples incur that cost, so the remaining difference is the write to disk, which happens in the MappedByteBuffer case but not in the direct-buffer case.
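For comparison, a sketch (file name and sizes are invented) that maps the whole region once instead of calling `map()` on every write. Each `map()` call is a system call plus page-table setup, and the dirty pages of a mapped buffer are eventually flushed to disk by the OS; `force()` makes that flush explicit here:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileChannel.MapMode;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MapOnceSketch {
    public static void main(String[] args) throws IOException {
        int chunk = 1024 * 32;   // 32KB per write, as in the question
        int chunks = 1000;
        Path path = Path.of("map-once-demo.bin"); // hypothetical file name
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {

            // Map the whole region ONCE; mapping per write repeats the
            // mmap system call and page-table work on every iteration.
            MappedByteBuffer buf =
                    ch.map(MapMode.READ_WRITE, 0, (long) chunk * chunks);

            for (int i = 0; i < chunks; i++) {
                buf.position(i * chunk);
                buf.put((byte) 1);  // dirty one byte per 32KB chunk
            }
            buf.force(); // flush dirty pages to disk explicitly
        }
    }
}
```

Mapping with `MapMode.READ_WRITE` extends the file to the mapped size, so pre-sizing the file (as the question does) avoids growth during the test; the cost that remains is the page flush, which the direct-buffer test never pays.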

Answered 2016-03-09T12:29:43.733