
I am trying to load a dataset into GraphDB 7.0. I wrote a Python script in Sublime Text 3 to transform and load the data. The program suddenly stopped and closed, the computer threatened to restart but didn't, and I lost several hours' worth of computing because GraphDB won't let me query the inserts. This is the error I get in GraphDB:

The currently selected repository cannot be used for queries due to an error:

org.openrdf.repository.RepositoryException: java.lang.RuntimeException: There is not enough memory for the entity pool to load: 65728645 bytes are required but there are 0 left. Maybe cache-memory/tuple-index-memory is too big.

I set the JVM as follows:

-Xms8g
-Xmx9g

I don't remember exactly what values I set for the cache and index memory.

For the record, the database I need to parse has about 300k records; the program stopped at about 50k. What do I need to do to resolve this issue?


1 Answer


Open the Workbench and check how much memory you have given to the cache.

-Xmx should be set to a value that is enough for

cache-memory + memory-for-queries + entity-pool-hash-memory

Sadly, the last of these cannot be calculated easily because it depends on the number of entities in the repository. You will either have to (a rough sketch of both options follows the list):

  1. Increase the Java memory with a bigger value for -Xmx, or
  2. Decrease the value for cache-memory.
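
For example, here is a rough plain-text sketch of both options, assuming GraphDB is started with custom JVM options and the repository parameters are editable wherever you originally set them (the numbers are illustrative assumptions, not values measured for your data):

  Option 1 - raise the Java heap (illustrative values):
      -Xms10g
      -Xmx12g

  Option 2 - keep -Xmx9g and lower the repository cache settings
  (illustrative values):
      cache-memory:        4g
      tuple-index-memory:  2g

The error reports that the entity pool needed about 63 MB (65728645 bytes) while nothing was left, so even a modest reduction of cache-memory, or a slightly larger -Xmx, should free enough room. Restart GraphDB after changing the JVM options so they take effect.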
answered 2016-05-19T13:51:17.980