
My approach was to create a hundred thousand local collections and populate them with random strings, something like this:

    // needs: java.math.BigInteger, java.security.SecureRandom, java.util.HashMap
    SecureRandom random = new SecureRandom();
    for (int i = 0; i < 100000; i++) {
        HashMap<String, String> map = new HashMap<String, String>();
        for (int j = 0; j < 30; j++) {
            // random 130-bit values rendered in base 32, used as throwaway keys and values
            map.put(new BigInteger(130, random).toString(32), new BigInteger(130, random).toString(32));
        }
    }

I have also provided the -XX:+UseGCOverheadLimit JVM parameter, but I cannot get the error. Is there an easy and reliable way/hack to trigger this error?


3 Answers


Since you haven't accepted any answer, I'll assume that none of them have worked for you. Here's one that will. But first, a review of the conditions that trigger this error:

The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered

So, you have to consume almost all of the heap, keep it allocated, and then allocate lots of garbage. Putting lots of stuff into a Map isn't going to do this for you.

    // needs: java.util.LinkedList, java.util.List
    public static void main(String[] argv)
    throws Exception
    {
        // fill most of the heap with live data, then churn garbage forever
        List<Object> fixedData = consumeAvailableMemory();
        while (true)
        {
            // allocate and immediately discard; this is what keeps the collector busy
            Object data = new byte[64 * 1024 - 1];
        }
    }

    private static List<Object> consumeAvailableMemory()
    throws Exception
    {
        LinkedList<Object> holder = new LinkedList<Object>();
        while (true)
        {
            try
            {
                holder.add(new byte[128 * 1024]);
            }
            catch (OutOfMemoryError ex)
            {
                // back off by one chunk so there is a little headroom left
                holder.removeLast();
                return holder;
            }
        }
    }

The consumeAvailableMemory() method fills up the heap with relatively small chunks of memory. "Relatively small" is important because the JVM will put "large" objects (512k bytes in my experience) directly into the tenured generation, leaving the young generation empty.

After I've consumed most of the heap, I just allocate and discard. The smaller block size in this phase is important: I know that I'll have enough memory for at least one allocation, but probably not more than two. This will keep the GC active.

Running this produces the desired error in under a second:

    > java -Xms1024m -Xmx1024m GCOverheadTrigger
    Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at GCOverheadTrigger.main(GCOverheadTrigger.java:12)
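
If you want to watch the thrashing that leads up to the error, GC logging is enough. The -verbose:gc flag below is an addition here, not part of the run shown above; it is a standard HotSpot option:

    > java -verbose:gc -Xms1024m -Xmx1024m GCOverheadTrigger

You should see back-to-back full collections that recover almost nothing before the error is thrown.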

And, for completeness, here's the JVM that I'm using:

    > java -version
    java version "1.6.0_45"
    Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
    Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)

And now my question for you: why in the world would you want to do this?

answered 2013-06-21T12:25:13.843

This:

    HashMap<String, String> map = new HashMap<String, String>();

is scoped within the loop, and nothing outside the loop keeps a reference to the maps it creates. Each map therefore becomes eligible for garbage collection at the end of its loop iteration, so the live set never grows.

You need to create a collection of objects outside the loop, and use the loop to populate that collection.
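
A minimal sketch of that fix, keeping the sizes from the question (the outer allMaps list and its name are illustrative, not from the original code):

    // needs: java.math.BigInteger, java.security.SecureRandom,
    //        java.util.ArrayList, java.util.HashMap, java.util.List
    SecureRandom random = new SecureRandom();
    List<HashMap<String, String>> allMaps = new ArrayList<HashMap<String, String>>();
    for (int i = 0; i < 100000; i++) {
        HashMap<String, String> map = new HashMap<String, String>();
        for (int j = 0; j < 30; j++) {
            map.put(new BigInteger(130, random).toString(32),
                    new BigInteger(130, random).toString(32));
        }
        allMaps.add(map);   // the outer list keeps every map reachable
    }

Because allMaps outlives the loop, none of the maps can be collected and the heap genuinely fills up.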

answered 2013-06-14T16:11:12.913

I think this should do the trick ... if you run it long enough:

    // needs: java.util.HashMap
    HashMap<Long, String> map = new HashMap<Long, String>();
    for (long i = 0; true; i++) {
        for (int j = 0; j < 100; j++) {
            String s = "" + j;   // 100 short-lived strings per outer iteration
            map.put(i, s);       // only the last value for key i stays reachable
        }
    }

What I'm doing is slowly building up the amount of non-garbage (live data) while creating a significant amount of garbage at the same time. If this runs until the live data fills nearly all of the heap, the GC reaches the point where the percentage of time spent collecting exceeds the threshold.

answered 2013-06-14T16:23:33.687