I have a very large array of doubles that I am handling using a disk-based file and a paged list of MappedByteBuffers — see this question for more background. I am running Java 1.5 on Windows XP.
Here is the key part of my code that allocates the buffers against the file...
try
{
    // Create a random access file and size it so it can hold all our data:
    // the extent times the size of a double.
    f = new File(_base_filename);
    _filename = f.getAbsolutePath();
    _ioFile = new RandomAccessFile(f, "rw");
    _ioFile.setLength(_extent * BLOCK_SIZE);
    _ioChannel = _ioFile.getChannel();

    // Make enough MappedByteBuffers to handle the whole lot.
    _pagesize = bytes_extent;
    long pages = 1;
    long diff = 0;
    while (_pagesize > MAX_PAGE_SIZE)
    {
        _pagesize /= PAGE_DIVISION;
        pages *= PAGE_DIVISION;
        // Make sure we stay on double boundaries: a double must not span pages.
        diff = _pagesize % BLOCK_SIZE;
        if (diff != 0) _pagesize -= diff;
    }

    // What is the difference between the total bytes covered by all the pages
    // and the overall total? We will probably have a few bytes left over because
    // of the rounding down that happens each time the page size is divided.
    diff = bytes_extent - (_pagesize * pages);
    if (diff > 0)
    {
        // Check whether adding the remainder to the last page would tip it over
        // the maximum size; if not, the remainder just goes on the final page.
        if (_pagesize + diff > MAX_PAGE_SIZE)
        {
            // Need one more page.
            pages++;
        }
    }

    // Make the byte buffers and put them on the list.
    int size = (int) _pagesize; // safe cast: the loop above keeps _pagesize below Integer.MAX_VALUE
    long offset = 0;            // must be long: page * _pagesize can exceed Integer.MAX_VALUE
    for (int page = 0; page < pages; page++)
    {
        offset = page * _pagesize;
        // The last page should be just big enough to accommodate any leftover odd bytes.
        if ((bytes_extent - offset) < _pagesize)
        {
            size = (int) (bytes_extent - offset);
        }
        // Map the buffer to the right place.
        MappedByteBuffer buf = _ioChannel.map(FileChannel.MapMode.READ_WRITE, offset, size);
        // Stick the buffer on the list.
        _bufs.add(buf);
    }

    Controller.g_Logger.info("Created memory map file: " + _filename);
    Controller.g_Logger.info("Using " + _bufs.size() + " MappedByteBuffers");
    _ioChannel.close();
    _ioFile.close();
}
catch (Exception e)
{
    Controller.g_Logger.error("Error opening memory map file: " + _base_filename);
    Controller.g_Logger.error("Error creating memory map file: " + e.getMessage());
    e.printStackTrace();
    Clear();
    if (_ioChannel != null) _ioChannel.close();
    if (_ioFile != null) _ioFile.close();
    if (f != null) f.delete();
    throw e;
}
I get the error mentioned in the title after the second or third buffer is allocated.
I thought it was something to do with contiguous memory being available, so have tried with different sizes and numbers of pages, but with no overall benefit.
What exactly does "Not enough storage is available to process this command" mean, and what, if anything, can I do about it?
I thought the whole point of MappedByteBuffers was the ability to handle structures larger than you can fit on the heap, and treat them as if they were in memory.
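That understanding is easy to check in isolation. Here is a minimal, self-contained demo (separate from my code above; the file name is just a temp file) that maps a small file and confirms the buffer is direct, i.e. its storage lives outside the Java heap:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapDemo {
    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("mapdemo", ".bin");
        tmp.deleteOnExit();

        RandomAccessFile raf = new RandomAccessFile(tmp, "rw");
        raf.setLength(8 * 1024); // room for 1024 doubles
        FileChannel ch = raf.getChannel();

        // Map the whole file; the backing storage is the OS page cache, not the heap.
        MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, raf.length());
        buf.putDouble(0, 42.0);

        System.out.println(buf.isDirect());   // mapped buffers are always direct
        System.out.println(buf.getDouble(0)); // 42.0

        ch.close();
        raf.close();
    }
}
```

So the data itself is off-heap; the catch (as I now understand it) is that each mapping still consumes process virtual address space, which is limited to roughly 2 GB per process on 32-bit Windows.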
Any clues?
EDIT:
In response to an answer below (@adsk), I changed my code so that I never have more than a single active MappedByteBuffer at any one time. When I refer to a region of the file that is currently unmapped, I junk the existing map and create a new one. I still get the same error after about 3 map operations.
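The remap-on-demand scheme I mean is roughly the following sketch (the `SinglePagePager` class and its API are made up here for illustration; my real code differs, but the mapping/discarding logic is the same):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Keep exactly one live MappedByteBuffer; remap whenever an index
// falls outside the currently mapped page.
public class SinglePagePager {
    private static final int DOUBLE_BYTES = 8;

    private final RandomAccessFile file;
    private final FileChannel channel;
    private final long pageDoubles;   // doubles per page
    private MappedByteBuffer current; // the only live mapping
    private long currentPage = -1;

    public SinglePagePager(File f, long totalDoubles, long pageDoubles) throws IOException {
        this.file = new RandomAccessFile(f, "rw");
        this.file.setLength(totalDoubles * DOUBLE_BYTES);
        this.channel = file.getChannel();
        this.pageDoubles = pageDoubles;
    }

    private MappedByteBuffer pageFor(long index) throws IOException {
        long page = index / pageDoubles;
        if (page != currentPage) {
            // Drop the old mapping. The OS only releases the address range
            // once the buffer object is garbage collected, which is exactly
            // where this approach still falls down for me.
            current = null;
            long offset = page * pageDoubles * DOUBLE_BYTES;
            long bytes = Math.min(pageDoubles * DOUBLE_BYTES, channel.size() - offset);
            current = channel.map(FileChannel.MapMode.READ_WRITE, offset, bytes);
            currentPage = page;
        }
        return current;
    }

    public double get(long index) throws IOException {
        return pageFor(index).getDouble((int) ((index % pageDoubles) * DOUBLE_BYTES));
    }

    public void put(long index, double value) throws IOException {
        pageFor(index).putDouble((int) ((index % pageDoubles) * DOUBLE_BYTES), value);
    }

    public void close() throws IOException {
        current = null;
        channel.close();
        file.close();
    }
}
```

The data round-trips fine across remaps; the problem is purely that the discarded mappings are not released promptly.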
The quoted bug about the GC failing to collect MappedByteBuffers still seems to be a problem in JDK 1.5.
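One workaround I have seen suggested for that bug is to release the mapping eagerly by invoking the buffer's internal cleaner via reflection. To be clear, this is an unsupported Sun-JVM internal (it works on Sun JDKs up to Java 8, and strong encapsulation blocks it on modern JDKs, where the sketch below just falls back to doing nothing); the helper name is my own:

```java
import java.lang.reflect.Method;
import java.nio.MappedByteBuffer;

public final class Unmapper {
    private Unmapper() {}

    // Best-effort eager unmap. On Sun JDKs <= 8 a DirectByteBuffer exposes a
    // public cleaner() method whose clean() releases the mapping immediately.
    // If reflection is blocked (Java 9+ encapsulation) we silently leave the
    // buffer for the garbage collector, as before.
    public static void unmap(MappedByteBuffer buf) {
        try {
            Method cleanerMethod = buf.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buf);
            Method cleanMethod = cleaner.getClass().getMethod("clean");
            cleanMethod.setAccessible(true);
            cleanMethod.invoke(cleaner);
        } catch (Exception e) {
            // Unsupported JVM or access denied: fall back to GC.
        }
    }
}
```

Warning: where the clean() call succeeds, touching the buffer afterwards accesses unmapped memory and can crash the JVM, so the buffer must never be used again after calling this.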