I’m writing a C/C++ client/server sockets application. At the moment, the client connects to the server every 50 ms and sends a message.
Everything seems to work, but the data flow is not continuous: suddenly the server stops receiving anything, and then gets 5 messages at once… And sometimes everything works fine…
Does anyone have an idea where this strange behaviour comes from?
Some part of the code:
Client:
while (true)
{
    if (SDL_GetTicks() - time >= 50)
    {
        socket = new socket();
        socket->write("blah");
        message.clear();
        message = socket->read();
        socket->close();
        delete socket;
        time = SDL_GetTicks();
    }
}
Server:
while (true)
{
    fd_set readfs;
    struct timeval timeout = {0, 0};
    FD_ZERO(&readfs);
    FD_SET(sock, &readfs);
    select(sock + 1, &readfs, NULL, NULL, &timeout);
    if (FD_ISSET(sock, &readfs))
    {
        SOCKADDR_IN csin;
        socklen_t crecsize = sizeof csin;
        SOCKET csock = accept(sock, (SOCKADDR *) &csin, &crecsize);
        sock_err = send(csock, buffer, 32, 0);
        closesocket(csock);
    }
}
Edits:
1. I tried to do
int flag = 1;
setsockopt(socket, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof flag);
in both the client and the server, but the problem is still there.
2. Yes, those repeated connections/disconnections are very inefficient, but when I try to write
socket = new socket();
while (true)
{
    if (SDL_GetTicks() - time >= 50)
    {
        socket->write("blah");
        message.clear();
        message = socket->read();
        time = SDL_GetTicks();
    }
}
then the message is only sent (or received) once…
Finally:
I had forgotten to apply TCP_NODELAY to the client socket on the server side. Now it works perfectly!
I put the processing in threads so that the sockets stay open.
Thank you all 🙂
This is what is called the “Nagle delay”. The algorithm waits in the TCP stack for more data to arrive before actually sending anything to the network, until some timeout expires. So you should either modify the Nagle timeout (http://fourier.su/index.php?topic=249.0) or disable the Nagle algorithm altogether (http://www.unixguide.net/network/socketfaq/2.16.shtml), so that data is sent per send call.