
It looks like I misunderstood how Windows handles sockets in TIME_WAIT when a lot of sockets are being opened. If too many are hanging around in TIME_WAIT, it simply errors out. Linux cleans up the old connections and succeeds (at least on my machine; I'm not sure where this is documented).

I'm trying to write a coroutine-based echo server, but its behaviour seems somewhat random. I'm obviously missing something. Can anyone tell me whether my approach is wrong and, if so, what I'm missing?

Update: I tested this on Linux (Ubuntu 14.04 / gcc 4.8 / Boost 1.56) and everything seems fine. On Windows, the client sometimes starts throwing exceptions (Client Exception: Only one usage of each socket address (protocol/network address/port) is normally permitted). Other times it runs fine.

Update 2: I thought the problem was in my code, but it seems to be more a problem with my understanding of Windows networking. Apparently, when sockets are in TIME_WAIT on Windows they just stay there, even when the new sockets being requested exceed some limit, whereas on Linux, if too many are lingering, they get dropped when new sockets are requested.
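If it really is TIME_WAIT buildup on the client side, one possible workaround (just a sketch, not something I've confirmed is the right fix, and it trades TIME_WAIT for an abortive close) would be to enable SO_LINGER with a zero timeout on the client socket before it goes away, so the port is released immediately:

// Possible workaround (untested sketch): abortive close so the client port
// skips TIME_WAIT. `sock` is the connected tcp::socket from echo_client below.
boost::system::error_code ec;
sock.set_option(boost::asio::socket_base::linger(true, 0), ec);
if (ec)
    std::cerr << "set_option(linger) failed: " << ec.message() << std::endl;
sock.close(ec); // close now sends an RST instead of going through TIME_WAIT

Whether that is acceptable depends on the application: with a zero linger timeout the peer sees an aborted connection rather than a clean shutdown.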

#include <iostream>
#include <thread>
#include <memory>
#include <atomic>
#include <cstring>  // memcpy, memset, strcmp

#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>

using namespace boost::asio::ip;
using namespace boost::asio;
using std::cout;
using std::endl;

std::atomic<int> active;

// Accepts `c` connections on port 6789, spawning an echo coroutine per client.
void echo_server(boost::asio::io_service &svc, int c)
{
    spawn(svc, [&svc, c](yield_context y) {
        try
        {
            tcp::acceptor acceptor(svc, tcp::endpoint(tcp::v4(), 6789));
            for (int i = 0; i < c; i++)
            {
                std::shared_ptr<tcp::socket> s = std::make_shared<tcp::socket>(svc);
                acceptor.async_accept(*s, y);

                spawn(y, [s](yield_context y) mutable {
                    try
                    {
                        streambuf buf;
                        for (;;)
                        {
                            async_read(*s, buf, transfer_at_least(1), y);
                            async_write(*s, buf, y);
                        }
                    }
                    catch (boost::system::system_error &e)
                    {
                        if (e.code() != error::eof)
                            cout << e.what() << endl;
                    }
                });
            }
            cout << "Server Done\n";
        }
        catch (std::exception &e)
        {
            std::cerr << "Server Exception: " << e.what() << std::endl;
        }
    });
}

// Connects to the echo server, writes a small test message and reads back the echo.
void echo_client(boost::asio::io_service &svc)
{
    spawn(svc, [&svc](yield_context y) {
        tcp::socket sock(svc);
        try
        {
            async_connect(sock, tcp::resolver(svc).async_resolve(
                tcp::resolver::query("localhost", "6789"), y), y);

            char data[128];
            memcpy(data, "test", 5);
            async_write(sock, buffer(data), y);
            memset(data, 0, sizeof(data));
            async_read(sock, buffer(data), transfer_at_least(1), y);
            if (strcmp("test", data))
                cout << "Error, server is broken\n";

        }
        catch (std::exception &e)
        {
            std::cerr << "Client Exception: " << e.what() << std::endl;
        }

        active--;
    });
}

int main(int argc, char **argv)
{
    io_service svc;
    int c = 10000;
    active = 0;

    // Schedule a server
    echo_server(svc, c);

    // Schedule a bunch of clients, run only 1000 at a time.
    while (c > 0)
    {
        cout << "Remain " << c << endl;
        for (int i = 0; i < 1000; i++)
        {
            echo_client(svc);
        }
        active = 1000;
        while (active > 0)
        {
            if (svc.run_one() == 0) 
            {
                cout << "IO Service reset\n";
                svc.reset();
            }
        }
        c -= 1000;
    }
    svc.run();

    return 0;
}

1 Answer


I think there are some unexplained things in your driver. In particular, what are you resetting the service for? Try:

int main()
{
    boost::asio::io_service svc;
    int c = 100;

    // Schedule a server
    echo_server(svc, c);

    total_clients = 0;

    // Schedule a bunch of clients
    for (int i = 0; i < c; i++)
    {
        echo_client(svc);
    }

    std::cout << "total_clients: " << total_clients << "\n";
    svc.run();
}

Works for me:

total_clients: 100
Done handling new connections
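
(Note: the snippet above doesn't show where total_clients is declared or incremented; presumably the answerer's modified echo_client counts each client as it is scheduled, roughly along these lines. This is my guess, not the actual code from the answer:)

std::atomic<int> total_clients; // assumed global counter, replacing `active`

void echo_client(boost::asio::io_service &svc)
{
    total_clients++; // count the client as soon as it is scheduled
    spawn(svc, [&svc](yield_context y) {
        // ... same body as the original echo_client, without the active-- bookkeeping
    });
}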
Answered 2014-08-26T08:33:13.143