I am writing a test program for a server. In the test app I try to connect a large number of clients to the server, but after a while I get all kinds of errors like these:
Connection reset by peer: socket write error
or
java.net.SocketException: Connection reset
or
java.net.ConnectException: Connection refused: connect
I use a new socket for every client I connect to the server.
Could someone enlighten me about this strange behaviour?
Unfortunately you haven’t provided many details about your server’s nature. I assume you are writing a typical TCP server. In this answer I will not go into any Java-specific details.
The short advice is: insert a delay between client connections. Without it you are effectively simulating a DoS attack against your own server.
For the longer one, read below.
Usually a TCP server creates only one listening socket by calling (in the lovely C interface) int sockfd = socket(...) and passing the result (sockfd in our case) to the bind() and listen() functions. After these preparations the server calls accept(), which puts the server to sleep (if the socket was marked as blocking); when a client on the other side of the Earth calls connect(), accept() on the server side, with the support of the OS kernel, creates the connected socket.
The actual number of possible pending connections is governed by listen(). listen() has a backlog parameter which defines the maximum number of connections the OS kernel should queue for the socket (basically the sum of all connections in the SYN_RCVD and ESTABLISHED states). Historically, the recommended value for the backlog in the 1980s was something like 5, which is obviously miserable nowadays. In FreeBSD 7.2, for example, the hard limit for the backlog can be read from the kern.ipc.somaxconn sysctl; Fedora 10 exposes the corresponding limit as net.core.somaxconn (/proc/sys/net/core/somaxconn).
P.S.
Sorry for my terrible English.