I am trying to choose the best approach for firing a large number of HTTP requests in parallel. Below are the two approaches I have so far:

  1. Using Apache HttpAsyncClient with CompletableFutures:

    try (CloseableHttpAsyncClient httpclient = HttpAsyncClients.custom()
            .setMaxConnPerRoute(2000).setMaxConnTotal(2000)
            .setUserAgent("Mozilla/4.0")
            .build()) {
        httpclient.start();
        HttpGet request = new HttpGet("http://bing.com/");
        long start = System.currentTimeMillis();
        CompletableFuture.allOf(
                Stream.generate(() -> request).limit(1000).map(req -> {
                    CompletableFuture<Void> future = new CompletableFuture<>();
                    httpclient.execute(req, new FutureCallback<HttpResponse>() {
                        @Override
                        public void completed(final HttpResponse response) {
                            System.out.println("Completed with: "
                                    + response.getStatusLine().getStatusCode());
                            future.complete(null);
                        }

                        @Override
                        public void failed(final Exception ex) {
                            future.completeExceptionally(ex);
                        }

                        @Override
                        public void cancelled() {
                            future.cancel(true);
                        }
                    });
                    System.out.println("Started request");
                    return future;
                }).toArray(CompletableFuture[]::new)).get();
        System.out.println("Took: " + (System.currentTimeMillis() - start) + " ms");
    }
    
  2. The traditional thread-per-request approach:

    long start1 = System.currentTimeMillis();
    URL url = new URL("http://bing.com/");
    ExecutorService executor = Executors.newCachedThreadPool();

    Stream.generate(() -> url).limit(1000).forEach(requestUrl -> {
        executor.submit(() -> {
            try {
                // URLConnection has no getResponseCode(); HttpURLConnection does
                HttpURLConnection conn = (HttpURLConnection) requestUrl.openConnection();
                System.out.println("Completed with: " + conn.getResponseCode());
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        System.out.println("Started request");
    });

    executor.shutdown();
    

Over multiple runs I noticed that the traditional approach completes almost twice as fast as the async/futures approach.

Even though I expected the dedicated threads to run faster, should the difference be this significant, or is something wrong with the async implementation? If not, what is the right approach to take here?

1 Answer

The problem at hand depends on a lot of factors:

  • 硬件
  • 操作系统(及其配置)
  • JVM实现
  • 网络设备
  • 服务器行为

First question - should the difference be this significant?

It depends on the load, the pool sizes, and the network, but it could be much more than the observed factor of 2 in either direction (in favour of either the async or the threaded solution). Based on your later comment, the difference is more due to misbehaviour, but for the sake of argument I will explain the possible cases.

Dedicated threads can be quite a burden. (Interrupt handling and thread scheduling are done by the operating system if you are using the Oracle [HotSpot] JVM, as these tasks are delegated.) The OS/system can become unresponsive if there are too many threads, slowing down your batch processing (and other tasks). There are a lot of administrative tasks around thread management, which is why thread (and connection) pooling is a thing. Although a good operating system should be able to handle a few thousand concurrent threads, there is always the chance that some limit or (kernel) event is hit.

This is where pooling and async behaviour come in handy. For example, a pool of 10 physical threads does all the work. If a task is blocked (waiting for the server response in this case), it enters the "Blocked" state (see image) and the next task gets the physical thread to do some work. When a thread is notified (data arrived) it becomes "Runnable", at which point the pooling mechanism can pick it up (this could be the OS- or JVM-implemented solution). For further reading on the thread states I recommend W3Rescue. To understand thread pooling better I recommend this Baeldung article.

Thread transitions
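To make the pooling idea above concrete, here is a minimal, self-contained sketch; the pool size of 4 and the `Thread.sleep` standing in for a blocked network wait are assumptions for illustration, not part of the original code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSketch {

    // Runs 'tasks' blocking jobs on a pool of 'threads' workers and
    // returns how many of them completed.
    static int runBatch(int threads, int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(50); // stand-in for waiting on a server response
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                completed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 16 blocking tasks share only 4 physical threads, yet all finish.
        System.out.println("Completed tasks: " + runBatch(4, 16));
    }
}
```

Each worker thread services one task at a time; while a task "waits", the thread is blocked, but the pool as a whole keeps draining the queue without spawning a thread per task.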

Second question - is something wrong with the async implementation? If not, what is the right approach to go about here?

The implementation is OK; there is no problem with it. The behaviour is just different from the threaded way. The main question in these cases is mostly what the SLAs (service level agreements) are. If you are the only "customer" of the service, then basically you have to decide between latency and throughput, and the decision will affect only you. Mostly this is not the case, so I would recommend some kind of pooling, which is supported by the library you are using.
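As one example of library-supported pooling (an alternative sketch, not the library from the question): on JDK 11+ the built-in `java.net.http.HttpClient` lets you cap the worker pool by supplying your own executor. The pool size of 10 here is an arbitrary assumption:

```java
import java.net.http.HttpClient;
import java.time.Duration;
import java.util.concurrent.Executors;

public class PooledClientSketch {

    // Builds an HttpClient whose async work runs on a bounded pool
    // instead of an unbounded thread-per-request scheme.
    static HttpClient buildClient(int poolSize) {
        return HttpClient.newBuilder()
                .executor(Executors.newFixedThreadPool(poolSize))
                .connectTimeout(Duration.ofSeconds(5))
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = buildClient(10);
        System.out.println("Client built: " + (client != null));
    }
}
```

With Apache HttpAsyncClient, the `setMaxConnPerRoute`/`setMaxConnTotal` settings in the question play the analogous role on the connection side.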

Third question - however, I just noted that the time taken is roughly the same the moment you read the response stream as a string; I wonder why this is?

The message most likely arrives completely in both cases (the response is probably not a long stream, just a few HTTP packets), but if you are reading only the headers, the response body does not need to be parsed and loaded into the CPU, which reduces the latency of reading the data actually received. I think this is a cool representation of latencies (source and source): Reach times
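The extra cost of "reading the response as a string" is essentially that the whole body stream has to be drained and decoded. A sketch of that work, using an in-memory stream instead of a real response (the class and method names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class BodyReadSketch {

    // Drains the whole stream into memory and decodes it - the work you
    // skip when you only look at the status line and headers.
    static String readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return new String(buf.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        InputStream fakeBody = new ByteArrayInputStream(
                "<html>hello</html>".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(fakeBody));
    }
}
```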

This came out as quite a long answer, so TL;DR: scaling is a really hardcore topic; it depends on a lot of things:

  • hardware: number of physical cores, multi-threading capacity, memory speed, network interface
  • operating system (and its configuration): thread management, interrupt handling
  • JVM implementation: thread management (internal or delegated to the OS), not to mention GC and JIT configuration
  • network devices: some limit the number of concurrent connections from a given IP, some pool non-HTTPS connections and act as proxies
  • server behaviour: pooled workers or per-request workers, etc.

In your case the server was most likely the bottleneck, since both methods gave the same result in the corrected case (HttpResponse::getStatusLine().getStatusCode() and HttpURLConnection::getResponseCode()). To give a proper answer you should measure your server's performance with tools like JMeter or LoadRunner, then size your solution accordingly. This article is more about DB connection pooling, but the logic applies here as well.
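Before reaching for a full load-testing tool, a batch can also be sanity-checked locally with a plain timer. This hypothetical harness (all names and the simulated sleep are assumptions) just measures how long N simulated requests take on a given pool size, which is the kind of number you would then compare across configurations:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TimedBatchSketch {

    // Times 'tasks' simulated requests (each sleeping 'sleepMs') on a
    // fixed pool and returns the elapsed wall-clock milliseconds.
    static long timeBatch(int threads, int tasks, long sleepMs) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        CompletableFuture<?>[] futures = new CompletableFuture<?>[tasks];
        for (int i = 0; i < tasks; i++) {
            futures[i] = CompletableFuture.runAsync(() -> {
                try {
                    Thread.sleep(sleepMs); // stand-in for one request
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, pool);
        }
        CompletableFuture.allOf(futures).join();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // 40 tasks on 8 threads need at least 5 "rounds" of sleeping.
        System.out.println("Elapsed ms: " + timeBatch(8, 40, 5));
    }
}
```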

Answered 2018-12-10T20:43:19.123