To minimize latency (I don't care about packet loss) I want the smallest possible receive buffer for UDP. However, when I set SO_RCVBUF to below 1000 (with setsockopt), my program never receives any packets. The datagrams I am sending have 28 bytes of data, for a total on-wire packet size of 70 bytes, so why can I not receive anything if SO_RCVBUF is < 1000? And how do I change this, to allow a smaller buffer size?
Additionally, is it possible to set the buffer in terms of number of packets, rather than bytes? Or is there some way I can manually empty it?
Making the socket receive buffer smaller will not reduce latency. Instead you need to dequeue all available packets each time. This can be accomplished with a non-blocking socket and edge-triggered epoll or kqueue: on a "readable" event, read until you get EWOULDBLOCK.
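A minimal sketch of that pattern, assuming Linux/epoll, IPv4, and an arbitrary example port, with most error handling trimmed: the socket is made non-blocking, registered edge-triggered, and drained until recv() returns EWOULDBLOCK.

    /* Sketch only: drain a non-blocking UDP socket on each edge-triggered event. */
    #include <errno.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>

    #define EXAMPLE_PORT 5000            /* arbitrary example port */

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        fcntl(s, F_SETFL, fcntl(s, F_GETFL, 0) | O_NONBLOCK);   /* non-blocking */

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(EXAMPLE_PORT);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = s };  /* edge-triggered */
        epoll_ctl(ep, EPOLL_CTL_ADD, s, &ev);

        char buf[2048];
        for (;;) {
            struct epoll_event events[1];
            if (epoll_wait(ep, events, 1, -1) <= 0)
                continue;
            /* Drain completely: with EPOLLET there is no further notification
             * for datagrams that were already queued when the event fired. */
            for (;;) {
                ssize_t len = recv(s, buf, sizeof(buf), 0);
                if (len < 0) {
                    if (errno == EAGAIN || errno == EWOULDBLOCK)
                        break;                  /* queue is empty */
                    perror("recv");
                    break;
                }
                /* ... process the datagram in buf[0..len) ... */
            }
        }
    }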
As to why you don't get any input with a small SO_RCVBUF value, look here - http://vger.kernel.org/~davem/skb_sk.html - and here - http://lxr.linux.no/#linux+v2.6.37/include/net/sock.h#L621
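In short, each queued datagram is charged against SO_RCVBUF for the whole sk_buff (its "truesize", typically several hundred bytes even for a 28-byte payload), not just the payload, and the kernel also adjusts the value you pass in (doubling it and enforcing a built-in floor). A quick way to see what you actually got is to read the value back; a small sketch, assuming Linux (the exact numbers vary by kernel version):

    /* Sketch: request a tiny SO_RCVBUF and print what the kernel actually granted. */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        int requested = 512;               /* smaller than the ~1000 from the question */
        setsockopt(s, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));

        int granted = 0;
        socklen_t len = sizeof(granted);
        getsockopt(s, SOL_SOCKET, SO_RCVBUF, &granted, &len);

        printf("requested %d, kernel granted %d\n", requested, granted);
        return 0;
    }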
Hope this helps.
This is most probably platform specific, so, what platform are you targeting?
If you're using Windows then I suggest that you use overlapped I/O and I/O Completion Ports: set the recv buffer to 0 and always have multiple pending RecvFrom() calls. This should a) remove the stack's ability to buffer datagrams when you don't have a RecvFrom() pending, and b) still let you receive and process datagrams, since your own pending buffers are always available. You then tune the number of overlapped operations that you keep outstanding so that it's always a few more than the number of cores processing the inbound datagrams, and you should get what you want.
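A rough sketch of that arrangement, assuming Winsock2 and a single completion-port thread; the port number, buffer sizes and PENDING_RECVS count are arbitrary examples and most error handling is omitted:

    /* Sketch: zero SO_RCVBUF plus multiple pending overlapped WSARecvFrom() calls. */
    /* Link with ws2_32.lib. */
    #include <winsock2.h>
    #include <stdio.h>
    #include <string.h>

    #define PENDING_RECVS 8                 /* a few more than the number of cores */

    typedef struct {
        WSAOVERLAPPED      ov;              /* first member, so LPOVERLAPPED casts back */
        WSABUF             wsabuf;
        char               data[64];        /* plenty for the 28-byte datagrams */
        struct sockaddr_in from;
        int                fromlen;
    } RECV_CTX;

    static void post_recv(SOCKET s, RECV_CTX *ctx)
    {
        DWORD flags = 0;
        memset(&ctx->ov, 0, sizeof(ctx->ov));
        ctx->wsabuf.buf = ctx->data;
        ctx->wsabuf.len = sizeof(ctx->data);
        ctx->fromlen = sizeof(ctx->from);
        if (WSARecvFrom(s, &ctx->wsabuf, 1, NULL, &flags,
                        (struct sockaddr *)&ctx->from, &ctx->fromlen,
                        &ctx->ov, NULL) == SOCKET_ERROR &&
            WSAGetLastError() != WSA_IO_PENDING) {
            fprintf(stderr, "WSARecvFrom failed: %d\n", WSAGetLastError());
        }
    }

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET s = WSASocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP, NULL, 0, WSA_FLAG_OVERLAPPED);

        int zero = 0;                       /* SO_RCVBUF = 0: no buffering in the stack */
        setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char *)&zero, sizeof(zero));

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5000);        /* arbitrary example port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        HANDLE iocp = CreateIoCompletionPort((HANDLE)s, NULL, 0, 0);

        static RECV_CTX ctx[PENDING_RECVS];
        for (int i = 0; i < PENDING_RECVS; i++)
            post_recv(s, &ctx[i]);          /* keep several receives pending at all times */

        for (;;) {
            DWORD bytes;
            ULONG_PTR key;
            LPOVERLAPPED ov;
            if (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE)) {
                RECV_CTX *c = (RECV_CTX *)ov;   /* ov is the first member of RECV_CTX */
                /* ... process the datagram in c->data[0..bytes) ... */
                post_recv(s, c);                /* repost immediately */
            }
        }
    }

The key point is reposting each WSARecvFrom() as soon as its completion is processed, so the number of outstanding receives stays constant.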
Your question doesn't make sense. Reducing the buffer size doesn't reduce latency. It just increases the probability that incoming datagrams will be dropped, which happens if there isn't room in the socket receive buffer. Your answer about two simultaneous incoming packets doesn't make sense either. Latency is a function of how fast you process the incoming data, not of how large the buffer is.