
I tried gRPC, but since gRPC uses immutable protobuf message objects, I ran into a lot of OOM errors like this one:

Exception in thread "grpc-default-executor-68" java.lang.OutOfMemoryError: Direct buffer memory
    at java.nio.Bits.reserveMemory(Bits.java:658)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
    at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:645)
    at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:228)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:204)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:132)
    at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:262)
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:157)
    at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:93)
    at io.grpc.netty.NettyWritableBufferAllocator.allocate(NettyWritableBufferAllocator.java:66)
    at io.grpc.internal.MessageFramer.writeKnownLength(MessageFramer.java:182)
    at io.grpc.internal.MessageFramer.writeUncompressed(MessageFramer.java:135)
    at io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:125)
    at io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:165)
    at io.grpc.internal.AbstractServerStream.writeMessage(AbstractServerStream.java:108)
    at io.grpc.internal.ServerImpl$ServerCallImpl.sendMessage(ServerImpl.java:496)
    at io.grpc.stub.ServerCalls$ResponseObserver.onNext(ServerCalls.java:241)
    at play.bench.BenchGRPC$CounterImpl$1.onNext(BenchGRPC.java:194)
    at play.bench.BenchGRPC$CounterImpl$1.onNext(BenchGRPC.java:191)
    at io.grpc.stub.ServerCalls$2$1.onMessage(ServerCalls.java:191)
    at io.grpc.internal.ServerImpl$ServerCallImpl$ServerStreamListenerImpl.messageRead(ServerImpl.java:546)
    at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1.run(ServerImpl.java:417)
    at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
io.grpc.StatusRuntimeException: CANCELLED
    at io.grpc.Status.asRuntimeException(Status.java:430)
    at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:266)
    at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:320)
    at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

I'm not sure whether this is caused by object creation. I gave the process 5 GB of memory and it still hits OOM. I need some help.

EDIT

I've put my bench, proto, dependencies, and example into this gist. The problem is that memory usage is very high and sooner or later leads to an OOME; there is also a strange NPE:

SEVERE: Exception while executing runnable io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$2@312546d9
java.lang.NullPointerException
    at io.netty.buffer.PoolChunk.initBufWithSubpage(PoolChunk.java:378)
    at io.netty.buffer.PoolChunk.initBufWithSubpage(PoolChunk.java:369)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:194)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:132)
    at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:262)
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:157)
    at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:93)
    at io.grpc.netty.NettyWritableBufferAllocator.allocate(NettyWritableBufferAllocator.java:66)
    at io.grpc.internal.MessageFramer.writeKnownLength(MessageFramer.java:182)
    at io.grpc.internal.MessageFramer.writeUncompressed(MessageFramer.java:135)
    at io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:125)
    at io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:165)
    at io.grpc.internal.AbstractServerStream.writeMessage(AbstractServerStream.java:108)
    at io.grpc.internal.ServerImpl$ServerCallImpl.sendMessage(ServerImpl.java:496)
    at io.grpc.stub.ServerCalls$ResponseObserver.onNext(ServerCalls.java:241)
    at play.bench.BenchGRPCOOME$CounterImpl.inc(BenchGRPCOOME.java:150)
    at play.bench.CounterServerGrpc$1.invoke(CounterServerGrpc.java:171)
    at play.bench.CounterServerGrpc$1.invoke(CounterServerGrpc.java:166)
    at io.grpc.stub.ServerCalls$1$1.onHalfClose(ServerCalls.java:154)
    at io.grpc.internal.ServerImpl$ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerImpl.java:562)
    at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$2.run(ServerImpl.java:432)
    at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

1 Answer


The problem is that StreamObserver.onNext doesn't block, so there is no pushback when you write too much. There is an open issue for this. There needs to be a way for you to interact with flow control and be told that you should slow down your sending rate. For the client side, one workaround is to use ClientCall directly: call Channel.newCall and then pay attention to isReady and onReady. For the server side, there is no easy workaround.
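To illustrate the pattern, here is a minimal, self-contained sketch of that isReady/onReady discipline. It does not use gRPC at all: `FlowControlSketch` and its bounded "transport" queue are invented for this example, and only the method names `isReady`, `sendMessage`, and the onReady callback are meant to mirror the real `ClientCall`/`ClientCall.Listener` API. The point is that the sender writes only while the transport signals readiness, instead of calling onNext in an unbounded loop (which is what piles up direct buffers and leads to the OOM above).

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical model of gRPC-style outbound flow control.
// Names isReady()/sendMessage()/onReady mirror ClientCall's API;
// everything else here is invented for the demonstration.
public class FlowControlSketch {
    static final int WINDOW = 4;            // pretend transport window size
    final Queue<Integer> transport = new ArrayDeque<>();
    Runnable onReadyHandler = () -> {};     // like ClientCall.Listener.onReady()

    // Mirrors ClientCall.isReady(): true while the transport can absorb more.
    boolean isReady() { return transport.size() < WINDOW; }

    // Mirrors ClientCall.sendMessage(): enqueues without blocking.
    void sendMessage(int msg) { transport.add(msg); }

    // Simulates the transport flushing one message; when it becomes
    // writable again, the onReady callback fires and the sender resumes.
    void drainOne() {
        transport.poll();
        if (isReady()) onReadyHandler.run();
    }

    // Sender loop: write only while isReady(), resume from onReady.
    // Returns the number of messages actually sent.
    int sendAll(int total) {
        int[] sent = {0};
        onReadyHandler = () -> {
            while (sent[0] < total && isReady()) sendMessage(sent[0]++);
        };
        onReadyHandler.run();               // initial burst up to the window
        while (sent[0] < total || !transport.isEmpty()) drainOne();
        return sent[0];
    }

    public static void main(String[] args) {
        FlowControlSketch s = new FlowControlSketch();
        int sent = s.sendAll(10);
        System.out.println("sent=" + sent + " pending=" + s.transport.size());
    }
}
```

The key design point is that the sender never queues more than the window allows: writes stop the moment `isReady()` turns false and resume only from the onReady callback, which is exactly the interaction the answer says you get by dropping down to `ClientCall` instead of a blind `StreamObserver.onNext` loop.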

Answered 2016-02-27T23:52:30.530