Disclosure: the code I'm working on is for university coursework.
Background: The task I'm trying to complete is to report on the effect of different threading techniques. To do this I have written several classes which respond to a request from a client using Java Sockets. The idea is to flood the server with requests and report on how different threading strategies cope with this. Each client will make 100 requests, and in each iteration we're increasing the number of clients by 50 until something breaks.
Problem: repeatedly and consistently, the following exception occurs:
Caused by: java.net.NoRouteToHostException: Cannot assign requested address
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
This happens in several scenarios, including when both the client and server are running on localhost. Connections can be made successfully for a while; the exception is thrown soon after trying to connect 150 clients.
My first thought was that it could be Linux's limit on open file descriptors (1024), but I don't think so. I also checked that all socket connections are closed properly (i.e. within a correct finally block).
I'm hesitant to post the code because I'm not sure which parts would be the most relevant, and don't want to have a huge listing of code in the question.
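To give a sense of the shape of the code without a huge listing, each client request looks roughly like this (a stripped-down sketch rather than my actual classes; the host, port, and protocol are illustrative):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class RequestClient {
        public static void main(String[] args) throws IOException {
            for (int i = 0; i < 100; i++) {                   // each client makes 100 requests
                Socket socket = null;
                try {
                    socket = new Socket("localhost", 8080);   // illustrative port
                    PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(socket.getInputStream()));
                    out.println("REQUEST " + i);              // send the request
                    String response = in.readLine();          // read the reply
                } finally {
                    if (socket != null) {
                        socket.close();                       // always close the socket
                    }
                }
            }
        }
    }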
Has anyone come across this before? How can I avoid the NoRouteToHostException?
EDIT (further questions are italicised)
Some good answers so far point to either the ephemeral port range or RFC 2780, both of which suggest that I have too many connections open. In both cases, the number of connections needed to reach the limit suggests that at some point I'm not closing connections.
Having debugged both client and server, both have been observed to hit the call to close() on the java.net.Socket instance. This suggests that connections are being closed (at least in the non-exceptional case). Is this a correct conclusion?
Also, is there an OS-level wait before ports become available again? If it only requires a short pause (or, optimistically, running a command), it would be possible to run the program separately for each batch of 50+ clients before the next attempt.
EDIT v2.0
Having taken the good answers provided, I modified my code to call setReuseAddress(true) on every Socket connection made by the client. This did not have the desired effect, and I am still limited to 250-300 clients. After the program terminates, running netstat -a shows many socket connections in the TIME_WAIT state.
My assumption was that if a socket was in the TIME_WAIT state and had been set with the SO_REUSEADDR option, any new socket attempting to use that port would be able to do so - however, I am still receiving the NoRouteToHostException.
Is this correct? Is there anything else which can be done to solve this problem?
Have you tried setting:
echo "1" >/proc/sys/net/ipv4/tcp_tw_reuse
and/or
echo "1" >/proc/sys/net/ipv4/tcp_tw_recycle
These settings may make Linux re-use the TIME_WAIT sockets. Unfortunately I can't find any definitive documentation.
This may help:
The Ephemeral Port Range
Another important ramification of the ephemeral port range is that it limits the maximum number of connections from one machine to a specific service on a remote machine! The TCP/IP protocol uses the connection's 4-tuple to distinguish between connections, so if the ephemeral port range is only 4000 ports wide, that means that there can only be 4000 unique connections from a client machine to a remote service at one time.
So maybe you are running out of available ports. To check the ephemeral port range:
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
The output is from my Ubuntu system, where I'd have 28,232 ports for client connections. Hence, your test would fail as soon as you have 280+ clients.
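A quick way to sanity-check this limit on the client machine is to read the range and divide by the connections each client makes (a rough sketch; the figure of 100 connections per client comes from the test setup described in the question):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class PortRangeCheck {
        public static void main(String[] args) throws IOException {
            BufferedReader reader = new BufferedReader(
                    new FileReader("/proc/sys/net/ipv4/ip_local_port_range"));
            String[] range = reader.readLine().trim().split("\\s+");
            reader.close();

            int low = Integer.parseInt(range[0]);
            int high = Integer.parseInt(range[1]);
            int ephemeralPorts = high - low + 1;

            // If every port lingers in TIME_WAIT after use, this is roughly
            // how many clients (at 100 connections each) can run before
            // the machine runs out of source ports.
            System.out.println("Ephemeral ports: " + ephemeralPorts);
            System.out.println("Approx. max clients at 100 connections each: "
                    + ephemeralPorts / 100);
        }
    }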
Cannot assign requested address is the error string for the EADDRNOTAVAIL error.
I suspect you are running out of source ports. There are 16,383 ports in the dynamic range available for use as a source port (see RFC 2780). 150 clients * 100 connections = 15,000 ports, so you are probably hitting this limit.
If you're running out of source ports but aren't actually maintaining that many open connections, set the SO_REUSEADDR socket option. This will enable you to reuse local ports that are still in the TIME_WAIT state.
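In Java that would look something like the sketch below. Note that setReuseAddress() needs to be called before the socket is bound or connected (its behaviour is undefined once the socket is bound), so the option is set on an unconnected Socket first; host and port are illustrative:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class ReuseAddrClient {
        public static void main(String[] args) throws IOException {
            Socket socket = new Socket();                 // unconnected socket
            socket.setReuseAddress(true);                 // set before bind/connect
            socket.connect(new InetSocketAddress("localhost", 8080));
            try {
                // ... send request, read response ...
            } finally {
                socket.close();
            }
        }
    }

If setReuseAddress(true) is called after new Socket(host, port) has already connected, it may have no effect on that connection.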
If you are closing 500 connections per second you will run out of sockets. If you are connecting to the same locations (web servers) that use keepalive, you can implement connection pools so you don't close and reopen sockets. This will save CPU too.
Use of tcp_tw_recycle and tcp_tw_reuse can result in packets coming in from the previous connection; that is why there is a one-minute wait for the packets to clear.
For any other Java users that stumble across this question, I would recommend using connection pooling so connections are reused properly.
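As an illustration, a very small pool along these lines keeps a fixed set of sockets open and hands them back after each request instead of closing them (a bare-bones sketch, assuming every client talks to the same host and port and the server keeps connections open between requests):

    import java.io.IOException;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class SocketPool {
        private final BlockingQueue<Socket> idle = new LinkedBlockingQueue<Socket>();
        private final String host;
        private final int port;

        public SocketPool(String host, int port, int size) throws IOException {
            this.host = host;
            this.port = port;
            for (int i = 0; i < size; i++) {
                idle.add(new Socket(host, port));  // pre-open a fixed number of sockets
            }
        }

        public Socket borrow() throws InterruptedException {
            return idle.take();                    // blocks until a socket is free
        }

        public void giveBack(Socket socket) throws IOException {
            if (socket.isClosed()) {
                idle.add(new Socket(host, port));  // replace a dead connection
            } else {
                idle.add(socket);                  // reuse instead of closing
            }
        }
    }

Each client thread then calls borrow(), writes its request, reads the response, and calls giveBack(), so its 100 requests reuse a handful of connections instead of consuming 100 ephemeral ports.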