I wrote a simple TCP server application whose read fd_set contains the connection socket descriptor. The server simply sends an ACK whenever it receives a message, and the client only sends the next message after it receives the ACK from the server.
// timeval == NULL
select(maxfd, &read_set, NULL, NULL, NULL);
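Roughly, the receive loop looks like this (conn_fd is a stand-in for my connection descriptor; accept path and error handling trimmed):

#include <sys/select.h>
#include <sys/socket.h>

/* Simplified loop, blocking variant: select() sleeps until conn_fd is readable. */
static void serve_blocking(int conn_fd)
{
    char buf[4096];
    fd_set read_set;

    for (;;) {
        FD_ZERO(&read_set);                 /* rebuild the set every iteration */
        FD_SET(conn_fd, &read_set);

        /* timeout == NULL: block until the descriptor becomes readable */
        if (select(conn_fd + 1, &read_set, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(conn_fd, &read_set)) {
            ssize_t n = recv(conn_fd, buf, sizeof buf, 0);
            if (n <= 0)
                break;                      /* peer closed or error */
            send(conn_fd, "ACK", 3, 0);     /* acknowledge the message */
        }
    }
}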
With this call, throughput is about 3K messages/sec, and the latency between sending an ACK and receiving the client's next message is about 0.3 ms.
// tm.tv_sec = 0 and tm.tv_usec = 0
select(maxfd, &read_set, NULL, NULL, &tm);
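The zero-timeout variant is the same loop, just spinning; tm is reset on every pass since select() can modify the timeval on Linux:

#include <sys/select.h>
#include <sys/socket.h>

/* Simplified loop, polling variant: a zero timeout makes select() return
 * immediately, so the loop spins and polls the descriptor as fast as it can. */
static void serve_polling(int conn_fd)
{
    char buf[4096];
    fd_set read_set;
    struct timeval tm;

    for (;;) {
        FD_ZERO(&read_set);
        FD_SET(conn_fd, &read_set);

        /* Linux may update the timeval, so reset it each iteration. */
        tm.tv_sec = 0;
        tm.tv_usec = 0;

        int n = select(conn_fd + 1, &read_set, NULL, NULL, &tm);
        if (n < 0)
            break;
        if (n == 0)
            continue;                       /* nothing ready yet: poll again */

        ssize_t len = recv(conn_fd, buf, sizeof buf, 0);
        if (len <= 0)
            break;                          /* peer closed or error */
        send(conn_fd, "ACK", 3, 0);         /* acknowledge the message */
    }
}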
But with this call, throughput goes up to about 8K messages/sec and the latency drops to 0.18 ms.
In the latter case, select effectively becomes a non-blocking poll. Can someone please explain why it performs so much better than the blocking case?