
Edit

Apparently what I'm trying to do is beyond the scope of Thrift... if I make sure there is never more than one client on a connection at a time, everything works fine. Of course, that defeats the purpose, since I want several reusable connections open to the server to improve response times and lower overhead.
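For reference, a minimal sketch of that workaround: a hypothetical wrapper (SerializedClient is my name for it, not part of the app) that serializes all calls over one shared connection. It works, but every caller queues behind a single socket, which is exactly the overhead I was trying to avoid:

import org.apache.thrift.TException;

// Hypothetical wrapper: at most one caller on the wire at a time.
public class SerializedClient {
    private final DumboPayment.Client cli; // one shared Thrift client

    public SerializedClient(DumboPayment.Client cli) {
        this.cli = cli;
    }

    // synchronized guarantees one in-flight request per connection
    public synchronized boolean pvCheck(int toolId) throws TException {
        return cli.pvCheck(toolId);
    }
}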

If anyone has suggestions for an alternative way to achieve this (or can point out where my conclusion is wrong), it would be much appreciated.

Background

I have a multi-component application that communicates mostly over Thrift (mostly Java->PHP connections).

Everything seemed fine so far, but then a Java->Java connection was introduced where the client is a servlet that can fire off hundreds of requests per second.

The method being accessed has the following interface:

bool pvCheck(1:i32 toolId) throws(1:DPNoToolException nte),

To make sure it wasn't anything strange on the server side, I replaced the implementation with a trivial one:

    @Override
    public boolean pvCheck(int toolId) throws TException {
        //boolean ret = api.getViewsAndDec(toolId);
        return true;
    }

Errors / probable causes?

It runs fine as long as there aren't many connections, but as soon as requests come close together, connections start getting stuck in the reader.

If I pull one of them up in the debugger, the stack looks like this:

Daemon Thread [http-8080-197] (Suspended)   
    BufferedInputStream.read(byte[], int, int) line: 308    
    TSocket(TIOStreamTransport).read(byte[], int, int) line: 126    
    TSocket(TTransport).readAll(byte[], int, int) line: 84  
    TBinaryProtocol.readAll(byte[], int, int) line: 314 
    TBinaryProtocol.readI32() line: 262 
    TBinaryProtocol.readMessageBegin() line: 192    
    DumboPayment$Client.recv_pvCheck() line: 120    
    DumboPayment$Client.pvCheck(int) line: 105  
    Receiver.performTask(HttpServletRequest, HttpServletResponse) line: 157 
    Receiver.doGet(HttpServletRequest, HttpServletResponse) line: 109   
    Receiver(HttpServlet).service(HttpServletRequest, HttpServletResponse) line: 617    
    Receiver(HttpServlet).service(ServletRequest, ServletResponse) line: 717    
    ApplicationFilterChain.internalDoFilter(ServletRequest, ServletResponse) line: 290  
    ApplicationFilterChain.doFilter(ServletRequest, ServletResponse) line: 206  
    StandardWrapperValve.invoke(Request, Response) line: 233    
    StandardContextValve.invoke(Request, Response) line: 191    
    StandardHostValve.invoke(Request, Response) line: 127   
    ErrorReportValve.invoke(Request, Response) line: 102    
    StandardEngineValve.invoke(Request, Response) line: 109 
    CoyoteAdapter.service(Request, Response) line: 298  
    Http11AprProcessor.process(long) line: 859  
    Http11AprProtocol$Http11ConnectionHandler.process(long) line: 579   
    AprEndpoint$Worker.run() line: 1555 
    Thread.run() line: 619  

This seems to be triggered by data corruption, since I get the following exceptions:

10/11/22 18:38:55 WARN logger.Receiver: pvCheck had an exception
org.apache.thrift.TApplicationException: pvCheck failed: unknown result
    at *.thrift.generated.DumboPayment$Client.recv_pvCheck(DumboPayment.java:135)
    at *.thrift.generated.DumboPayment$Client.pvCheck(DumboPayment.java:105)
    at *.Receiver.performTask(Receiver.java:157)
    at *.Receiver.doGet(Receiver.java:109)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
    at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:859)
    at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
    at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1555)
    at java.lang.Thread.run(Thread.java:619)

10/11/22 17:59:46 ERROR [/ninja_ar].[Receiver]: サーブレット Receiver のServlet.service()が例外を投げました
java.lang.OutOfMemoryError: Java heap space
    at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:296)
    at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:290)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:198)
    at *.thrift.generated.DumboPayment$Client.recv_pvCheck(DumboPayment.java:120)
    at *.thrift.generated.DumboPayment$Client.pvCheck(DumboPayment.java:105)
    at *.Receiver.performTask(Receiver.java:157)
    at *.Receiver.doGet(Receiver.java:109)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:210)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:151)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:870)
    at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
    at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
    at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
    at java.lang.Thread.run(Thread.java:636)

Maybe I'm way off base here, but I'm fairly sure these are caused by the client continuing to try to read when nothing is being sent.
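To illustrate what I mean (this is just a sketch of the failure mode, not the actual Thrift internals): with a plain, unframed binary protocol, whatever four bytes happen to be next on the stream get interpreted as a length. So interleaved or corrupted writes make the reader either block forever waiting for data that never comes, or attempt a giant allocation like the OutOfMemoryError above:

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class BogusLength {
    public static void main(String[] args) throws IOException {
        // four stray bytes on the stream, e.g. from an interleaved write
        byte[] garbage = { 'H', 'T', 'T', 'P' };
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(garbage));
        int len = in.readInt(); // interpreted as a big-endian i32 "length"
        System.out.println("would allocate " + len + " bytes"); // 1213486160
    }
}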

Some implementation details

Both the server and the client use the Java binary protocol.

I wrote a simple client pool class that lets me reuse clients; these are its main functions:

public synchronized Client getClient() {
    if(clientQueue.isEmpty()) {
        return newClient();
    } else {
        return clientQueue.getLast();
    }
}

private synchronized Client newClient() {
    int leftToTry = serverArr.length;
    Client cli = null;
    while(leftToTry > 0 && cli == null) {
        log.info("Creating new connection to " + 
                serverArr[roundRobinPos] + port);
        TTransport transport = new TSocket(serverArr[roundRobinPos], port);
        TProtocol protocol = new TBinaryProtocol(transport);
        cli = new Client(protocol);
        try {
            transport.open();
        } catch (TTransportException e) {
            cli = null;
            log.warn("Failed connection to " + 
                    serverArr[roundRobinPos] + port);
        }

        roundRobinPos++;
        if(roundRobinPos >= serverArr.length) {
            roundRobinPos = 0;
        }
        leftToTry--;
    }

    return cli;
}

public void returnClient(Client cli) {
    clientQueue.addFirst(cli);
}

The client application (i.e. the Tomcat servlet) accesses it in the following way:

    Client dpayClient = null;
    if(dpay != null
            && (dpayClient = dpay.getClient()) != null) {

        try {
            dpayClient.pvCheck(requestParameters.getId());
        } catch (DPNoToolException e) {
            return;
        } catch (TException e) {
            log.warn("pvCheck had an exception", e);
        } finally {
            if(dpayClient != null) {
                dpay.returnClient(dpayClient);
            }
        }
    }

The actual Thrift server is brought up in the following way:

private boolean initThrift(int port, Configuration conf) {
    TProtocolFactory protocolFactory = new TBinaryProtocol.Factory();
    DPaymentHandler handler = new DPaymentHandler(conf);

    DumboPayment.Processor processor = 
        new DumboPayment.Processor(handler);

    InetAddress listenAddress;
    try {
        listenAddress = InetAddress.getLocalHost();
    } catch (UnknownHostException e) {
        LOG.error("Failed in thrift init", e);
        return false;
    }
    TServerTransport serverTransport;
    try {
        serverTransport = new TServerSocket(
                new InetSocketAddress(listenAddress, port));
    } catch (TTransportException e) {
        LOG.error("Failed in thrift init", e);
        return false;
    }

    TTransportFactory transportFactory = new TTransportFactory();
    TServer server = new TThreadPoolServer(processor, serverTransport,
            transportFactory, protocolFactory);

    LOG.info("Starting Dumbo Payment thrift server on " + 
            listenAddress + ":" + Integer.toString(port));
    server.serve();

    return true;
}

Finally

I've been stuck on this for a while now... quite possibly I'm missing something obvious. I'd really appreciate any help with this.

If any additional information is needed, please let me know. It's quite a mouthful already, so I tried to keep it to the (hopefully) most relevant bits.


3 Answers


My guess is that you have multiple threads trying to use the same client at the same time, and I'm not entirely sure the client is bulletproof against that. You could try using the async interface and building a thread-safe resource pool to access your clients.

Using Thrift-0.5.0.0, here is an example of creating an AsyncClient for your Thrift-generated code:

Factory fac = new AsyncClient.Factory(new TAsyncClientManager(), new TProtocolFactory() {
    @Override
    public TProtocol getProtocol( TTransport trans ) {
        return new TBinaryProtocol(trans);
    }
});
AsyncClient cl = fac.getAsyncClient( new TNonblockingSocket( "127.0.0.1", 12345 ));

However, if you look at the source, you'll notice that it has a single-threaded message handler; even though it uses NIO sockets, you may find that this becomes a bottleneck. To get more throughput, you have to create more async clients, check them out, and return them properly.

To simplify this, I made a quick little class to manage them. The only thing you need to do to adapt it to your needs is to add your own packages, and it should work for you, even though I haven't really tested it (at all, really):

public class Thrift {

    // This is the request
    public static abstract class ThriftRequest {

        private void go( final Thrift thrift, final AsyncClient cli ) {
            on( cli );
            thrift.ret( cli );
        }

        public abstract void on( AsyncClient cli );
    }

    // Holds all of our Async Clients
    private final ConcurrentLinkedQueue<AsyncClient>   instances = new ConcurrentLinkedQueue<AsyncClient>();
    // Holds all of our postponed requests
    private final ConcurrentLinkedQueue<ThriftRequest> requests  = new ConcurrentLinkedQueue<ThriftRequest>();
    // Holds our executor, if any
    private Executor                                 exe       = null;

    /**
     * This factory runs in thread bounce mode, meaning that if you call it from 
     * many threads, execution bounces between calling threads depending on when        
     * execution is needed.
     */
    public Thrift(
            final int clients,
            final int clients_per_message_processing_thread,
            final String host,
            final int port ) throws IOException {

        // We only need one protocol factory
        TProtocolFactory proto_fac = new TProtocolFactory() {

            @Override
            public TProtocol getProtocol( final TTransport trans ) {
                return new TBinaryProtocol( trans );
            }
        };

        // Create our clients
        Factory fac = null;
        for ( int i = 0; i < clients; i++ ) {

            if ( fac == null || i % clients_per_message_processing_thread == 0 ) {
                fac = new AsyncClient.Factory(
                    new TAsyncClientManager(),
                    proto_fac );
            }

            instances.add( fac.getAsyncClient( new TNonblockingSocket(
                host,
                port ) ) );
        }
    }
    /**
     * This factory runs callbacks in whatever mode the executor is setup for,
     * not on calling threads.
     */
    public Thrift( Executor exe,
            final int clients,
            final int clients_per_message_processing_thread,
            final String host,
            final int port ) throws IOException {
        this( clients, clients_per_message_processing_thread, host, port );
        this.exe = exe;
    }

    // Call this to grab an instance
    public void
            req( final ThriftRequest req ) {
        final AsyncClient cli;
        synchronized ( instances ) {
            cli = instances.poll();
        }
        if ( cli != null ) {
            if ( exe != null ) {
                // Executor mode
                exe.execute( new Runnable() {

                    @Override
                    public void run() {
                        req.go( Thrift.this, cli );
                    }

                } );
            } else {
                // Thread bounce mode
                req.go( this, cli );
            }
            return;
        }
        // No clients immediately available
        requests.add( req );
    }

    private void ret( final AsyncClient cli ) {
        final ThriftRequest req;
        synchronized ( requests ) {
            req = requests.poll();
        }
        if ( req != null ) {
            if ( exe != null ) {
                // Executor mode
                exe.execute( new Runnable() {

                    @Override
                    public void run() {
                        req.go( Thrift.this, cli );
                    }
                } );
            } else {
                // Thread bounce mode
                req.go( this, cli );
            }
            return;
        }
        // We did not need this immediately, hold onto it
        instances.add( cli );

    }

}

An example of how to use it:

// Make the pool (10 clients, up to 3 clients per message-processing thread)
Thrift t = new Thrift( 10, 3, "localhost", 8000 );
// Use the pool
t.req( new Thrift.ThriftRequest() {

    @Override
    public void on( AsyncClient cli ) {
        cli.MyThriftMethod( "stringarg", 111, new AsyncMethodCallback<AsyncClient.MyThriftMethod_call>() {
            @Override
            public void onError( Throwable throwable ) {
            }

            @Override
            public void onComplete( MyThriftMethod_call response ) {
            }
        });
    }
} );

You may also want to try different server modes, such as THsHaServer, to see which works best for your environment.
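In case it helps, here's a minimal sketch of standing up a THsHaServer. I'm assuming the Args-style API from Thrift 0.6+ here (0.5.0 used plain constructors). Note that the nonblocking servers require framed transport, so each client would also have to wrap its socket in a TFramedTransport:

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.server.THsHaServer;
import org.apache.thrift.transport.TNonblockingServerSocket;
import org.apache.thrift.transport.TNonblockingServerTransport;
import org.apache.thrift.transport.TTransportException;

public class HsHaExample {
    public static void serve(DumboPayment.Processor processor, int port)
            throws TTransportException {
        TNonblockingServerTransport serverTransport =
                new TNonblockingServerSocket(port);
        THsHaServer.Args args = new THsHaServer.Args(serverTransport)
                .processor(processor)
                .protocolFactory(new TBinaryProtocol.Factory());
        new THsHaServer(args).serve(); // blocks, like TThreadPoolServer.serve()
    }
}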

answered 2011-01-20T03:55:47

Try using THttpClient instead of TSocket in this line:

TTransport transport = new TSocket(serverArr[roundRobinPos], port);
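A hedged sketch of what that swap might look like. Note that THttpClient speaks Thrift-over-HTTP, so the server side would need an HTTP endpoint (e.g. a TServlet), and the "/thrift" path here is just a placeholder:

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.THttpClient;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;

public class HttpClientExample {
    public static DumboPayment.Client connect(String host, int port)
            throws TTransportException {
        // hypothetical URL; the server must expose Thrift over HTTP here
        TTransport transport = new THttpClient("http://" + host + ":" + port + "/thrift");
        TProtocol protocol = new TBinaryProtocol(transport);
        transport.open(); // a no-op for THttpClient, but keeps the pool code uniform
        return new DumboPayment.Client(protocol);
    }
}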
answered 2013-04-19T01:27:27

The ClientPool's returnClient function is not thread-safe:

public void returnClient(Client cli) {
    clientQueue.addFirst(cli);
}
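A minimal fix in the spirit of the pool's other methods would be to synchronize it as well (or to back the pool with a concurrent structure such as java.util.concurrent.ConcurrentLinkedDeque and drop the locking entirely):

// synchronized on the same pool instance as getClient()/newClient()
public synchronized void returnClient(Client cli) {
    clientQueue.addFirst(cli);
}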
answered 2015-05-10T14:44:18