We have an application server that has been observed sending TCP headers with a window size of 0 at times when the network was congested (at a client's site).
We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K to adapt to the available throughput.
We would also like to be able to act upon it becoming 0 (nothing gets sent, users wait => no good). So, any info, links, or pointers to Indy code are welcome...
Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-)
Note: it's Indy 9/D2007 on Windows Server 2003 SP2.
More gory details:
The TCP zero window cases happen on the middle tier talking to the DB server. They happen at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not have been caused by it. We want to know when that happens and have a way to do something (logging at least) in our code. So the core question is: who sets the window size to 0, and where? Where do we hook (in Indy?) to know when that condition occurs?
The window size in the TCP header is normally set by the TCP stack software to reflect the size of the buffer space available. If your server is sending packets with the window set to zero, it is probably because the client is sending data faster than the application running on the server is reading it, and the buffers associated with the TCP connection are now full.
This is perfectly normal operation for the TCP protocol if the client sends data faster than the server can read it. The client should refrain from sending data until the server advertises a non-zero window size again (there's no point in sending, as the data would be discarded anyway).
This may or may not reflect a serious problem between client and server, but if the condition persists it probably means the application running on the server has stopped reading the received data (once it starts reading, this frees up buffer space for TCP, and the TCP stack will send a new non-zero window size).
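If you want to catch this condition from your own code before users start complaining, one low-tech option is to log how much received-but-unread data is queued on a connection's socket: when that figure approaches the size of the receive buffer, the stack is about to advertise a zero window. Below is a minimal Delphi sketch using plain Winsock calls; ReceiveBacklogInfo is just a hypothetical helper name, and it assumes you can reach the raw socket handle of the affected connection (with Indy 9 that is typically Socket.Binding.Handle on the TIdTCPConnection, but verify the exact path against your Indy sources):

```pascal
uses
  SysUtils, WinSock;

{ Report how full a socket's receive buffer is. When the pending       }
{ (received but unread) byte count approaches SO_RCVBUF, the stack is  }
{ close to advertising a zero window to the peer.                      }
function ReceiveBacklogInfo(ASocket: TSocket): string;
var
  Pending: u_long;
  RcvBuf, OptLen: Integer;
begin
  Pending := 0;
  if ioctlsocket(ASocket, FIONREAD, Pending) <> 0 then
  begin
    Result := Format('socket %d: ioctlsocket failed (%d)',
      [Integer(ASocket), WSAGetLastError]);
    Exit;
  end;

  RcvBuf := -1;  // stays -1 if the query fails
  OptLen := SizeOf(RcvBuf);
  getsockopt(ASocket, SOL_SOCKET, SO_RCVBUF, PChar(@RcvBuf), OptLen);

  Result := Format('socket %d: %d bytes unread of a %d-byte receive buffer',
    [Integer(ASocket), Integer(Pending), RcvBuf]);
end;
```

Calling this periodically from the thread that services the connection and writing the result to your normal log gives you a timestamped trail you can line up with the protocol analyzer captures.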
A TCP header with a window size of zero indicates that the receiver's buffers are full. This is a normal condition when the writer is faster than the reader.
In reading your description, it's not clear if this is unexpected. What caused you to open a protocol analyzer?
Since you might be interested in a solution to your problem, too:
If you have some control over what's running on the server side (the one that sends the zero-window messages): did you consider using setsockopt() with SO_RCVBUF to significantly increase the size of your socket's receive buffer?
In Indy, setsockopt() is a method of TIdSocketHandle. You should apply it to all the TIdSocketHandle objects associated with your socket; in Indy 9, those are reached through the Bindings property of your TIdTCPServer.
I suggest first using getsockopt() with SO_RCVBUF to see what the OS gives you as a default buffer size. Then increase it significantly, perhaps by successive trials, doubling the size every time. You might also want to re-run getsockopt() after your setsockopt() to ensure the setsockopt() actually took effect: there is usually an upper limit that the socket implementation places on buffer sizes. In that case there is usually an OS-dependent way to raise that ceiling, but those are rather extreme cases and you are not too likely to need it.
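To make that concrete, here is a rough Delphi sketch of the getsockopt/setsockopt dance described above. To stay independent of Indy-version-specific wrapper signatures it calls Winsock directly on the binding's Handle; GrowReceiveBuffer and TuneServerBuffers are hypothetical names, and the Bindings/Handle property paths are assumptions to verify against your Indy 9 sources. The listening socket must already exist (i.e. run this after the server has been activated); on Windows, accepted connections generally inherit the listening socket's buffer size.

```pascal
uses
  SysUtils, WinSock, IdSocketHandle, IdTCPServer;

{ Read the current SO_RCVBUF for a binding, then keep doubling it      }
{ until the target size is reached, re-reading after every             }
{ setsockopt() to see what the stack actually accepted.                }
procedure GrowReceiveBuffer(ABinding: TIdSocketHandle; ATargetSize: Integer);
var
  Sock: TSocket;
  Current, Requested, OptLen: Integer;
begin
  Sock := TSocket(ABinding.Handle);

  OptLen := SizeOf(Current);
  if getsockopt(Sock, SOL_SOCKET, SO_RCVBUF, PChar(@Current), OptLen) <> 0 then
    raise Exception.CreateFmt('getsockopt(SO_RCVBUF) failed: %d',
      [WSAGetLastError]);

  Requested := Current;
  if Requested <= 0 then
    Requested := 8 * 1024;  // fall back to a sane starting point

  while Requested < ATargetSize do
  begin
    Requested := Requested * 2;
    if setsockopt(Sock, SOL_SOCKET, SO_RCVBUF, PChar(@Requested),
      SizeOf(Requested)) <> 0 then
      Break;  // the stack refused this size; keep the last accepted one

    OptLen := SizeOf(Current);
    getsockopt(Sock, SOL_SOCKET, SO_RCVBUF, PChar(@Current), OptLen);
    if Current < Requested then
      Break;  // hit the implementation's ceiling
  end;
end;

{ Example use, aiming for a 256 KB buffer on every binding: }
procedure TuneServerBuffers(AServer: TIdTCPServer);
var
  I: Integer;
begin
  for I := 0 to AServer.Bindings.Count - 1 do
    GrowReceiveBuffer(AServer.Bindings[I], 256 * 1024);
end;
```

Keep in mind that a bigger buffer only buys time: if the application never catches up with reading, the window will still eventually hit zero.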
If you don't have control over the source code on the side that gets overflowed, check whether the software running there exposes a parameter to change that buffer size.
Good luck!