
16 millisecond quantization when sending/receiving TCP packets

I have a C++ application running on a Windows XP 32-bit system, sending and receiving short TCP/IP packets.

Measuring the arrival time accurately, I see the arrival times quantized to 16 millisecond units (meaning all packets arrive separated from each other by multiples of 16 ms).

To avoid packet aggregation I tried to disable the Nagle algorithm by setting the TCP_NODELAY option at the IPPROTO_TCP level on the socket, but it did not help.
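(For reference, a minimal Winsock sketch of the setsockopt() call being described; the sock handle and the surrounding connection setup are assumed, and error handling is kept to a minimum.)

// Minimal sketch of disabling Nagle on an already-created Winsock socket.
#include <winsock2.h>
#include <stdio.h>

#pragma comment(lib, "ws2_32.lib")

bool disable_nagle(SOCKET sock)
{
    BOOL flag = TRUE;   // non-zero turns Nagle's algorithm off
    int rc = setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                        (const char*)&flag, sizeof(flag));
    if (rc == SOCKET_ERROR) {
        printf("setsockopt(TCP_NODELAY) failed: %d\n", WSAGetLastError());
        return false;
    }
    return true;
}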

I suspect that the problem is related to the Windows scheduler, which also has a roughly 16 millisecond clock tick. Any idea of a solution to this problem? Thanks.
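(One way to see that suspected scheduler granularity directly is to time a short Sleep() against the performance counter; a rough, self-contained sketch follows. On a stock XP system this typically prints roughly 15-16 ms rather than 1 ms.)

// Minimal sketch: with the default timer resolution, Sleep(1) wakes on the
// next scheduler tick, so the measured delay reflects the system tick length.
#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    Sleep(1);                          // asks for 1 ms, wakes on the next tick
    QueryPerformanceCounter(&t1);

    double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                       / (double)freq.QuadPart;
    printf("Sleep(1) actually took %.3f ms\n", ms);
    return 0;
}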


Use a higher-resolution timer such as QueryPerformanceCounter() or __rdtsc(), being aware of their pitfalls.
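(A minimal sketch of such a timestamp helper built on QueryPerformanceCounter(); the Sleep() call only stands in for whatever blocking receive the application actually does.)

// Millisecond timestamp helper based on the performance counter, suitable for
// stamping each received packet instead of relying on the ~15.6 ms system
// tick. (__rdtsc() from <intrin.h> is an alternative, but it returns raw
// cycles and can drift across cores and power states on older hardware.)
#include <windows.h>
#include <stdio.h>

double now_ms()
{
    static LARGE_INTEGER freq = { 0 };
    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
    LARGE_INTEGER t;
    QueryPerformanceCounter(&t);
    return 1000.0 * (double)t.QuadPart / (double)freq.QuadPart;
}

int main()
{
    // Stamp two events and print their separation with sub-millisecond detail.
    double t0 = now_ms();
    Sleep(5);                  // stand-in for a blocking recv() returning
    double t1 = now_ms();
    printf("inter-arrival: %.3f ms\n", t1 - t0);
    return 0;
}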

Similarly, note that if you are using wait functions you may want to call timeBeginPeriod() for 1 ms resolution, or even implement a busy-wait delay function wrapped around a higher-resolution timer.
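(A rough sketch of both ideas, assuming winmm.lib is linked; the busy-wait variant spins a whole core, so it is only appropriate for very short delays.)

// timeBeginPeriod(1) raises the system timer to 1 ms resolution (affecting
// Sleep(), waitable timers and socket wait functions); a busy-wait built on
// the performance counter covers sub-millisecond waits. Pair every
// timeBeginPeriod() with a matching timeEndPeriod().
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

#pragma comment(lib, "winmm.lib")

void busy_wait_ms(double ms)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    LONGLONG target = start.QuadPart
                    + (LONGLONG)(ms * (double)freq.QuadPart / 1000.0);
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart < target);   // spins the CPU; keep delays short
}

int main()
{
    timeBeginPeriod(1);    // request a 1 ms scheduler tick
    Sleep(2);              // now returns close to 2 ms instead of ~15.6 ms
    busy_wait_ms(0.25);    // sub-millisecond delay without yielding the CPU
    timeEndPeriod(1);      // restore the default resolution
    printf("done\n");
    return 0;
}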


Comments

No comments yet...