Why would I get EADDRINUSE not from bind() but from listen()?


In a C++ Linux application I'm calling socket(), bind() and listen() to create a server socket. Usually, if the application is started twice with the same server port, bind() in the second process fails with EADDRINUSE. However, I now have a case where bind() apparently succeeded, but the subsequent listen() call failed with EADDRINUSE...
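
Here is a minimal sketch of the sequence I mean, with errno checked after each step (the port number is just an example, not the real one):

```cpp
#include <cstdio>
#include <cstring>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY); // wildcard address
    addr.sin_port        = htons(5555);       // example port

    // EADDRINUSE is normally reported here, by bind()...
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    // ...but in the case described above it surfaced here instead.
    if (listen(fd, SOMAXCONN) < 0) {
        perror("listen");
        return 1;
    }

    close(fd);
    return 0;
}
```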

This is probably a rare race condition, but I'd still be interested in which cases the second bind() can succeed while the second listen() fails. Does anyone know more about such a case?

This is on 32-bit RHEL 5.3.


Not sure about Linux, but on Windows, if a wildcard IP (INADDR_ANY, etc.) is specified when calling bind(), the underlying binding may be delayed until listen() or connect() is called, since at that point the OS has a better chance of deciding which network interface is best to use. bind() will not report an error in that situation.
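
If that kind of deferred binding were the cause here, binding to one concrete local address instead of the wildcard should force the conflict check to happen at bind() time. A rough sketch of that variation (192.0.2.10 is a placeholder; substitute one of the host's real interface addresses):

```cpp
#include <cstdio>
#include <cstring>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(5555); // example port

    // A concrete interface address instead of INADDR_ANY; 192.0.2.10 is a
    // placeholder, substitute a real local address.
    if (inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr) != 1) {
        std::fprintf(stderr, "invalid address\n");
        return 1;
    }

    // With a specific address there is nothing left to defer, so a
    // conflicting listener should already be detected here.
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    if (listen(fd, SOMAXCONN) < 0) { perror("listen"); return 1; }

    close(fd);
    return 0;
}
```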


setsockopt(..., SOL_SOCKET, SO_REUSEADDR, ...) should fix your problem.

See setsockopt(2) and socket(7).
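
Roughly like this; note that the option has to be set before bind(), and on Linux its main documented effect is to allow rebinding a port whose previous socket is still in TIME_WAIT (the port number is an arbitrary example):

```cpp
#include <cstdio>
#include <cstring>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    // SO_REUSEADDR must be set before bind(); on Linux it chiefly allows
    // rebinding a port whose previous socket is still in TIME_WAIT.
    int yes = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0) {
        perror("setsockopt");
        return 1;
    }

    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5555); // example port

    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    if (listen(fd, SOMAXCONN) < 0) { perror("listen"); return 1; }

    close(fd);
    return 0;
}
```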

(As to why the second bind() actually succeeds, no idea... that one should really have failed already, too.)
