I have an application that consists of numerous systems using UDP clients in remote locations. All clients send UDP packets to a central location for processing. In my application, it is critical that the central location knows what time the packet was sent by the remote location.
From a design perspective, would it be "safe" to assume that the central location could timestamp the packets as they arrive and use that as the "sent time"? Since the app uses UDP, shouldn't the packets either arrive almost immediately or not arrive at all? The other option would be to set up some kind of time syncing on each remote location. The disadvantage there is that I would then need to continually verify that the time syncing is working on each of potentially hundreds of remote locations.
My question is whether timestamping the UDP packets at the central location to determine "sent time" is a potential flaw. Is it possible to experience any delay with UDP?
For seconds resolution you can timestamp the packet when you receive it, but you still need a sequence number to filter out re-ordered or duplicate packets.
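A minimal receiver sketch along those lines, assuming a made-up packet layout of a 4-byte big-endian sequence number followed by the payload (the port and the process() hook are placeholders, not part of any existing protocol):

```python
import socket
import struct
import time

# Hypothetical packet layout: 4-byte sequence number, then payload.
PACKET_HEADER = struct.Struct("!I")

def receive_loop(bind_addr=("0.0.0.0", 9999)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind_addr)
    last_seq_per_sender = {}          # highest sequence number seen per client

    while True:
        data, sender = sock.recvfrom(2048)
        recv_time = time.time()       # server-side approximation of "sent time"

        seq = PACKET_HEADER.unpack_from(data)[0]
        payload = data[PACKET_HEADER.size:]

        # Drop duplicates and late re-ordered packets.
        if seq <= last_seq_per_sender.get(sender, -1):
            continue
        last_seq_per_sender[sender] = seq

        process(sender, seq, recv_time, payload)   # application-specific

def process(sender, seq, recv_time, payload):
    print(f"{sender} seq={seq} at {recv_time:.3f}: {payload!r}")
```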
This can make your remote stations less complex as they won't need a battery backed clock or synchronisation techniques.
For millisecond resolution you would want to calculate the round trip time (RTT) and use that offset to the clock on the receiver.
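One way to get that offset is to have the server probe each station for RTT and subtract half of it from the arrival time. The sketch below assumes the stations echo a small probe on a hypothetical control port; the probe count and timeout are arbitrary:

```python
import socket
import time

def measure_rtt(remote_addr, probes=5, timeout=1.0):
    """Send a few echo probes and return the best (smallest) RTT seen."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    samples = []
    for _ in range(probes):
        t0 = time.monotonic()
        sock.sendto(b"ping", remote_addr)
        try:
            sock.recvfrom(64)             # station echoes the probe back
        except socket.timeout:
            continue                      # lost probe, skip this sample
        samples.append(time.monotonic() - t0)
    sock.close()
    return min(samples) if samples else None   # min filters out queuing spikes

def estimate_sent_time(recv_time, rtt):
    # One-way delay is approximated as half the round trip.
    return recv_time - rtt / 2.0
```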
Unless you are using the precision time protocol (PTP) in a controlled environment you can never trust the clock of remote hosts.
There is always a delay in transmission, and UDP packets do not have guaranteed delivery nor are they guaranteed to arrive in sequence.
I would need more information about the context to recommend a better solution.
One option would be to require that the client clocks are synchronized with an external atomic clock. To ensure this, and to make your UDP scheme more robust, the server can reject any packets that arrive "late" (as determined by the difference between the server's clock, also externally synced, and the packet timestamp).
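A rough sketch of that rejection check, assuming the client embeds its send timestamp (epoch seconds) in each packet and that a two-second tolerance is acceptable; both are assumptions to tune for your deployment:

```python
import time

# Illustrative tolerance, not a recommended value.
MAX_SKEW_SECONDS = 2.0

def accept_packet(client_timestamp):
    """Return True if the packet's embedded timestamp is fresh enough to trust."""
    age = time.time() - client_timestamp
    # Reject packets that are too old *or* claim to come from the future;
    # either case signals network delay or a drifting client clock.
    return abs(age) <= MAX_SKEW_SECONDS
```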
If your server is acking packets, it can report to the client that it is (possibly) out of sync so that it can re-sync itself.
If your server is not acking packets, your whole scheme is probably going to fail anyhow due to dropped or out-of-order packets.
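If you do ack, the out-of-sync report could look something like the sketch below; the field names ("seq", "resync") and the JSON encoding are assumptions, not an established format:

```python
import json
import socket
import time

MAX_SKEW_SECONDS = 2.0   # same illustrative tolerance as above

def ack_packet(sock, client_addr, seq, client_timestamp):
    """Ack a packet and flag the client if its clock looks out of sync."""
    skew = abs(time.time() - client_timestamp)
    ack = {"seq": seq, "resync": skew > MAX_SKEW_SECONDS}
    sock.sendto(json.dumps(ack).encode(), client_addr)
```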