I'm writing a simple program that makes multiple connections to different servers for status checks. All of these connections are created on demand; up to 10 can exist simultaneously. I don't like the idea of one thread per socket, so I made all the client sockets non-blocking and put them into a select() pool.
It worked great, until my client complained that when a target server stops responding, it takes too long before the error is reported.
I've checked several topics in the forum. Some suggested using an alarm() signal or setting a timeout in the select() call. But I'm dealing with multiple connections, not just one: when a process-wide timeout signal fires, I have no way to tell which of the connections timed out.
Is there any way to change the system-default timeout duration?
You can use the SO_RCVTIMEO and SO_SNDTIMEO socket options to set timeouts for a socket's receive and send operations, like so:
struct timeval timeout;
timeout.tv_sec = 10;
timeout.tv_usec = 0;

if (setsockopt (sockfd, SOL_SOCKET, SO_RCVTIMEO, &timeout,
                sizeof timeout) < 0)
    error ("setsockopt failed\n");

if (setsockopt (sockfd, SOL_SOCKET, SO_SNDTIMEO, &timeout,
                sizeof timeout) < 0)
    error ("setsockopt failed\n");
Edit: from the setsockopt man page:
SO_SNDTIMEO
is an option to set a timeout value for output operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for output operations to complete. If a send operation has blocked for this much time, it returns with a partial count or with the error EWOULDBLOCK if no data were sent. In the current implementation, this timer is restarted each time additional data are delivered to the protocol, implying that the limit applies to output portions ranging in size from the low-water mark to the high-water mark for output.
SO_RCVTIMEO
is an option to set a timeout value for input operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for input operations to complete. In the current implementation, this timer is restarted each time additional data are received by the protocol, and thus the limit is in effect an inactivity timer. If a receive operation has been blocked for this much time without receiving additional data, it returns with a short count or with the error EWOULDBLOCK if no data were received. The struct timeval parameter must represent a positive time interval; otherwise, setsockopt() returns with the error EDOM.
I am not sure if I fully understand the issue, but I guess it's related to one I had: I'm using Qt with TCP socket communication, all non-blocking, on both Windows and Linux.
I wanted to get a quick notification when an already-connected client failed or completely disappeared, instead of waiting the default 900+ seconds until the disconnect signal was raised. The trick to get this working was to set the TCP_USER_TIMEOUT socket option of the SOL_TCP layer to the required value, given in milliseconds.
This is a comparatively new option (see https://www.rfc-editor.org/rfc/rfc5482), but apparently it works fine; I tried it with WinXP, Win7/x64 and Kubuntu 12.04/x64. My choice of 10 s turned out to be a bit long, but much better than anything else I'd tried before ;-)
The only issue I came across was finding the proper includes, as apparently this option hasn't been added to the standard socket headers (yet..), so I finally defined the constants myself as follows:
#ifdef WIN32
#include <winsock2.h>
#else
#include <sys/socket.h>
#endif

#ifndef SOL_TCP
#define SOL_TCP 6            // protocol level for TCP socket options
#endif

#ifndef TCP_USER_TIMEOUT
#define TCP_USER_TIMEOUT 18  // option: how long to retry on loss before timing out; value in ms
#endif
Setting this socket option only works when the client is already connected; the lines of code look like:
int timeout = 10000;  // user timeout in milliseconds [ms]
if (setsockopt (fd, SOL_TCP, TCP_USER_TIMEOUT, (char *) &timeout, sizeof (timeout)) < 0)
    perror ("setsockopt TCP_USER_TIMEOUT");
The failure of an initial connect is caught by a timer started when calling connect(), as there is no Qt signal for this case: the connected signal will not be raised because no connection is established, and the disconnected signal will also not be raised because there never was a connection.
Can't you implement your own timeout system?
Keep a sorted list, or better yet a priority heap as Heath suggests, of timeout events. In your select() or poll() calls, use the timeout value from the top of the timeout list. When that timeout arrives, perform the action attached to it.
That action could be closing a socket that hasn't connected yet.
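A minimal sketch of that scheme in C with select(); for simplicity it scans a plain array for the earliest deadline instead of maintaining a real priority heap, and the conn bookkeeping struct and the 10-second inactivity window are illustrative:
#include <sys/select.h>
#include <time.h>
#include <unistd.h>

struct conn {
    int    fd;        /* non-blocking socket, or -1 if slot unused */
    time_t deadline;  /* absolute time at which this connection times out */
};

/* One pass of the event loop: wait until the earliest deadline,
 * then close any connection whose deadline has passed. */
static void poll_once (struct conn *conns, int nconns)
{
    fd_set rfds;
    time_t now = time (NULL);
    time_t next = now + 60;   /* upper bound when nothing is pending */
    int i, maxfd = -1;

    FD_ZERO (&rfds);
    for (i = 0; i < nconns; i++) {
        if (conns[i].fd < 0)
            continue;
        FD_SET (conns[i].fd, &rfds);
        if (conns[i].fd > maxfd)
            maxfd = conns[i].fd;
        if (conns[i].deadline < next)
            next = conns[i].deadline;   /* "top of the heap" */
    }

    struct timeval tv = { next > now ? next - now : 0, 0 };
    select (maxfd + 1, &rfds, NULL, NULL, &tv);

    now = time (NULL);
    for (i = 0; i < nconns; i++) {
        if (conns[i].fd < 0)
            continue;
        if (FD_ISSET (conns[i].fd, &rfds)) {
            /* data (or connect completion) arrived: push the deadline back */
            conns[i].deadline = now + 10;
        } else if (conns[i].deadline <= now) {
            /* the timeout action: close a socket that never answered */
            close (conns[i].fd);
            conns[i].fd = -1;
        }
    }
}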
A connect timeout has to be handled with a non-blocking socket (see the GNU libc documentation on connect). You get connect to return immediately and then use select to wait, with a timeout, for the connection to complete.
This is also explained here: "Operation now in progress" error on connect() function.
int wait_on_sock (int sock, long timeout, int r, int w)
{
    struct timeval tv = {0, 0};
    fd_set fdset;
    fd_set *rfds, *wfds;
    int n, so_error;
    socklen_t so_len;

    FD_ZERO (&fdset);
    FD_SET (sock, &fdset);
    tv.tv_sec = timeout;
    tv.tv_usec = 0;
    TRACES ("wait in progress tv={%ld,%ld} ...\n",
            tv.tv_sec, tv.tv_usec);

    rfds = r ? &fdset : NULL;
    wfds = w ? &fdset : NULL;

    TEMP_FAILURE_RETRY (n = select (sock + 1, rfds, wfds, NULL, &tv));
    switch (n) {
    case 0:
        /* select() does not set errno on a timeout */
        ERROR ("wait timed out\n");
        errno = ETIMEDOUT;
        return -errno;
    case -1:
        ERROR_SYS ("error during wait\n");
        return -errno;
    default:
        /* select() says the socket is ready; check whether the
         * pending operation actually succeeded */
        so_len = sizeof (so_error);
        so_error = 0;
        getsockopt (sock, SOL_SOCKET, SO_ERROR, &so_error, &so_len);
        if (so_error == 0)
            return 0;
        errno = so_error;
        ERROR_SYS ("wait failed\n");
        return -errno;
    }
}
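For completeness, a sketch of calling wait_on_sock() to implement the connect timeout itself (connect_with_timeout is a hypothetical helper; it assumes addr is already filled in, and error handling is abbreviated):
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

/* Connect sock to addr, waiting at most `timeout` seconds.
 * Returns 0 on success, -errno on failure. */
int connect_with_timeout (int sock, const struct sockaddr *addr,
                          socklen_t addrlen, long timeout)
{
    int rc;

    /* make the socket non-blocking so connect() returns immediately */
    fcntl (sock, F_SETFL, fcntl (sock, F_GETFL, 0) | O_NONBLOCK);

    rc = connect (sock, addr, addrlen);
    if (rc < 0 && errno != EINPROGRESS)
        return -errno;

    if (rc != 0)
        /* connection in progress: it has completed (or failed)
         * once the socket becomes writable */
        rc = wait_on_sock (sock, timeout, 0, 1);

    return rc < 0 ? rc : 0;
}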