One of our customers is experiencing a problem with our streaming application (Win32). It seems that UDP (RTP) packets that should be sent by the application at a constant interval (say 20 ms) are actually sent with greatly variable deltas (say 15 ms - 25 ms - 10 ms - 30 ms). This is the only customer experiencing the problem, so the network card or some other OS network-related infrastructure is our primary suspect.
The question is: what kind of network configuration might introduce such a problem (AV? QoS?)
And how can I measure the time between actually calling the send function and the moment the packet is actually delivered to the network? Is there any tool available for this?
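One practical approach is to timestamp each send call in the application and compare those timestamps against a packet capture taken on the same host (e.g. with Wireshark or tcpdump); the difference shows how much delay is added between the send call and the wire. A minimal sketch, assuming a hypothetical destination address and 20 ms interval:

```python
import socket
import time

# Hypothetical destination; replace with the real RTP receiver.
DEST = ("127.0.0.1", 5004)
INTERVAL = 0.020  # target 20 ms between packets


def send_with_timestamps(payloads, dest=DEST, interval=INTERVAL):
    """Send each payload over UDP and record the time of each send call.

    Comparing these timestamps against the capture timestamps in Wireshark
    on the same machine shows where the jitter is introduced: before the
    send call (application/scheduler) or after it (stack/NIC)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_times = []
    try:
        for payload in payloads:
            send_times.append(time.perf_counter())
            sock.sendto(payload, dest)
            time.sleep(interval)
    finally:
        sock.close()
    return send_times


if __name__ == "__main__":
    times = send_with_timestamps([b"x" * 160 for _ in range(10)])
    deltas_ms = [(b - a) * 1000 for a, b in zip(times, times[1:])]
    print(["%.1f" % d for d in deltas_ms])
```

If the application-side deltas are already irregular, the problem is on the sending side (scheduling, timers) rather than in the network stack or NIC.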
I suspect any network issue can cause this problem.
There's no concept of QoS (quality of service) with basic UDP (to the point that you can lose packets, receive duplicates, etc.). Your network card has to queue up packets to write to the network, and since it's queueing packets from different applications, you can't guarantee delivery timing.
Routers can prioritise traffic as well, and that will affect the regularity of these packets.
EDIT: You've pointed out that you're measuring at the local NIC, so the point above about routers doesn't apply in this situation.
In short, there's no reason at all to expect that the behaviour above is anything other than acceptable.
If you are saying that you are measuring this directly on the NIC of the computer actually generating the packets (i.e. so you can discount all network influences), then a possible cause is the load on the computer itself.
If there are many applications running on the computer, especially interactive ones and ones with a strong user interaction bias (which tend to get priority from most schedulers), then you may find that your application creating the messages is simply finding it hard to compete for the time it needs.
Even if all your customer computers have the same software loaded, what applications they are actually running and what they are doing with them may have an influence.
Guys, the problem was actually the Windows timing functions. It turns out that Sleep() may have a resolution of more than 15 ms unless you programmatically set the timer resolution to 1 ms. So no relation whatsoever to the NIC.
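On Windows the usual way to raise the timer resolution is timeBeginPeriod(1) / timeEndPeriod(1) from winmm. A small cross-platform sketch that measures how long short sleeps actually take (on Windows with the default ~15.6 ms timer the measured values cluster well above the request; after timeBeginPeriod(1) they drop close to 1 ms):

```python
import sys
import time
import ctypes


def measure_sleep_resolution(request_ms=1.0, samples=20):
    """Sleep for `request_ms` repeatedly and return the actual
    elapsed times in milliseconds."""
    actual = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(request_ms / 1000.0)
        actual.append((time.perf_counter() - start) * 1000.0)
    return actual


if __name__ == "__main__":
    if sys.platform == "win32":
        # Raise the system timer resolution to 1 ms for this process.
        ctypes.windll.winmm.timeBeginPeriod(1)
    try:
        print(["%.2f" % v for v in measure_sleep_resolution()])
    finally:
        if sys.platform == "win32":
            # Always pair timeBeginPeriod with a matching timeEndPeriod.
            ctypes.windll.winmm.timeEndPeriod(1)
```

Note that raising the timer resolution is system-wide on older Windows versions and increases power consumption, so release it with timeEndPeriod when the stream stops.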