Can boost::asio only receive full UDP datagrams?

I am working on a UDP server built with boost::asio, and I started from the tutorial, customizing it to my needs. When I call socket.receive_from(boost::asio::buffer(buf), remote, 0, error); it fills my buffer with data from the packet, but, if my understanding is correct, it drops any data that won't fit in the buffer. Subsequent calls to receive_from will receive the next datagram available, so it looks to me like there is some loss of data without even a notice. Am I understanding this the wrong way?

I tried reading the boost::asio documentation over and over, but I didn't manage to find any clues as to how I am supposed to do this the right way. What I'd like to do is read a certain amount of data so that I can process it; if reading an entire datagram is the only way, I can manage that, but then how can I be sure not to lose the data I am receiving? What buffer size should I use to be sure? Is there any way to tell that my buffer is too small and that I'm losing information?

I have to assume that I may be receiving huge datagrams by design.


This is not specific to boost; it's just how datagram sockets work. You have to specify the buffer size, and if the packet doesn't fit into the buffer, then it will be truncated and there is no way to recover the lost information.

For example, the SNMP protocol specifies that:

   An implementation of this protocol need not accept messages whose
   length exceeds 484 octets. However, it is recommended that
   implementations support larger datagrams whenever feasible.

In short: when designing your communication protocol, you have to take into account that datagrams may be lost, or truncated if they exceed some specified size.
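
That said, on Linux you can at least detect that truncation happened, even though the dropped bytes are gone. This is a platform-specific sketch, not part of asio's portable API: Linux's recvfrom(2) returns the real datagram length when passed MSG_TRUNC, and asio forwards its message_flags argument to recvfrom verbatim, so the flag can be smuggled through receive_from. The 4096-byte buffer is just an example size.

    // Linux-only sketch: MSG_TRUNC makes recvfrom() report the real datagram
    // length even when it didn't fit, so truncation can be detected (the
    // truncated bytes themselves are still lost).
    #include <algorithm>
    #include <array>
    #include <iostream>
    #include <sys/socket.h>   // MSG_TRUNC -- not part of boost::asio
    #include <boost/asio.hpp>

    std::size_t receive_checked(boost::asio::ip::udp::socket& socket,
                                std::array<char, 4096>& buf,
                                boost::asio::ip::udp::endpoint& remote)
    {
        boost::system::error_code error;
        std::size_t n = socket.receive_from(boost::asio::buffer(buf), remote,
                                            MSG_TRUNC, error);
        if (!error && n > buf.size())
            std::cerr << "datagram truncated: " << n << " bytes sent, only "
                      << buf.size() << " kept\n";
        return std::min(n, buf.size());
    }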


The length field in the UDP header is 16 bits, so a UDP datagram, header included, can be at most 65,535 bytes; subtract the 8-byte UDP header and you end up with at most 65,527 bytes of data. Over IPv4 the practical ceiling is a little lower, 65,507 bytes, because the 16-bit IPv4 total-length field must also cover the 20-byte IP header. (Note that a datagram anywhere near that size has to be fragmented by the IP layer on any ordinary interface, since it far exceeds typical MTUs such as Ethernet's 1500 bytes.)

I just use a 64 KiB buffer because it's a nice round number.
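
Concretely, a 65,536-byte buffer can never be truncated, because no UDP datagram carries more payload than that. A minimal synchronous receive loop along those lines might look like the following sketch (the port number and logging are placeholders, not anything from the question):

    // Sketch of a receive loop with a buffer no UDP datagram can overflow.
    #include <array>
    #include <iostream>
    #include <boost/asio.hpp>

    int main()
    {
        boost::asio::io_context io;   // io_service in older Boost releases
        boost::asio::ip::udp::socket socket(
            io, boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 13));

        std::array<char, 65536> buf;  // 64 KiB >= any possible UDP payload
        boost::asio::ip::udp::endpoint remote;
        boost::system::error_code error;

        for (;;) {
            std::size_t n = socket.receive_from(boost::asio::buffer(buf),
                                                remote, 0, error);
            if (error)
                break;
            std::cout << "got " << n << " bytes from " << remote << "\n";
        }
    }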

You'll want to keep in mind that on the transmitting side you may need to explicitly enable fragmentation if you want to send datagrams larger than the interface MTU. From my Ubuntu 12.04 UDP(7) manpage:

   By default, Linux UDP does path MTU (Maximum Transmission Unit)
   discovery. This means the kernel will keep track of the MTU to a
   specific target IP address and return EMSGSIZE when a UDP packet
   write exceeds it. When this happens, the application should
   decrease the packet size. Path MTU discovery can be also turned
   off using the IP_MTU_DISCOVER socket option or the
   /proc/sys/net/ipv4/ip_no_pmtu_disc file; see ip(7) for details.
   When turned off, UDP will fragment outgoing UDP packets that
   exceed the interface MTU. However, disabling it is not
   recommended for performance and reliability reasons.
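
asio doesn't wrap IP_MTU_DISCOVER, but you can reach it through the socket's native descriptor (native_handle() in current Boost; older releases called it native()). A Linux-only sketch, under the assumption that you really do want the kernel to fragment:

    // Linux-only sketch: turn off path MTU discovery so the kernel fragments
    // outgoing datagrams larger than the interface MTU instead of failing
    // the write with EMSGSIZE.
    #include <cerrno>
    #include <netinet/in.h>   // IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DONT
    #include <sys/socket.h>   // setsockopt
    #include <boost/asio.hpp>

    void allow_fragmentation(boost::asio::ip::udp::socket& socket)
    {
        int value = IP_PMTUDISC_DONT;   // fragment rather than track path MTU
        if (setsockopt(socket.native_handle(), IPPROTO_IP, IP_MTU_DISCOVER,
                       &value, sizeof(value)) != 0)
            throw boost::system::system_error(
                boost::system::error_code(errno,
                                          boost::system::system_category()));
    }

As the manpage warns, though, leaving path MTU discovery on and keeping datagrams small is usually the better trade-off.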


Use getsockopt with the SO_NREAD option.

From the Mac OS X manpage:

   SO_NREAD returns the amount of data in the input buffer that is
   available to be received. For datagram oriented sockets, SO_NREAD
   returns the size of the first packet -- this differs from the
   ioctl() command FIONREAD that returns the total amount of data
   available.
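
asio has no portable wrapper for SO_NREAD either, so again the native descriptor is the way in. A macOS-only sketch (note the caveat in the comments: if no datagram is queued yet, the query returns 0 and the subsequent receive blocks on whatever arrives next):

    // macOS-only sketch: ask SO_NREAD for the size of the first queued
    // datagram, then size the buffer to match before receiving it.
    #include <cerrno>
    #include <sys/socket.h>   // getsockopt, SOL_SOCKET, SO_NREAD (Darwin)
    #include <vector>
    #include <boost/asio.hpp>

    std::vector<char> receive_whole_datagram(
        boost::asio::ip::udp::socket& socket,
        boost::asio::ip::udp::endpoint& remote)
    {
        int pending = 0;
        socklen_t length = sizeof(pending);
        if (getsockopt(socket.native_handle(), SOL_SOCKET, SO_NREAD,
                       &pending, &length) != 0)
            throw boost::system::system_error(
                boost::system::error_code(errno,
                                          boost::system::system_category()));

        // Caveat: pending == 0 means nothing is queued yet, so fall back to
        // a maximum-size buffer; a real server would only call this once the
        // socket is known to be readable.
        std::vector<char> buf(pending > 0 ? pending : 65536);
        std::size_t n = socket.receive_from(boost::asio::buffer(buf), remote);
        buf.resize(n);
        return buf;
    }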
