
Socket.AsyncSend behavior when remote client stops receiving

I'm running into an issue where I have a lot of SendAsync calls going on with dozens of client sockets, and at the moment, if any remote client stops receiving but doesn't actually disconnect, the app quickly eats up all of the SocketAsyncEventArgs (aka an sae) I have preallocated, because they're never released since the SendAsync never completes. The obvious solution to this is to implement a per-client send queue, which sounds easy enough, but I am unclear as to the specifics. I have one sae allocated per client to receive, which works perfectly, and ideally I'd love to allocate a single sae for each client's async sends.

I understand that popping in and asking a question that is a veiled request for a 'give me code' solution is looked down upon, but I honestly have not been able to turn up much, either on the XxxAsync methods, for which Microsoft has horribly poor examples, or on send queues in general.

Edit for @J.N.

I forgot to mention it, but I -am- actually using a bunch of preallocated sae's stored in a managing class that internally uses a ConcurrentBag. With one connection, in a test scenario where 20 small messages are sent per second (slightly more than double what the production server would actually send), the server eats up 500 preallocated sae's in a few seconds. If I implement 'create if empty' code, the sae count quickly climbs into the thousands.

I understand the functionality that the async methods of the Socket class provide, but if there's a network hiccup or something similar, this issue will outright bomb the server within seconds. A send queue sounds like a great solution, but I have no idea whether it's a good idea in a high-performance production environment or not.

The base problem remains, though: with a dozen messages per second per client going out, any network congestion or other factor that lags a connection without disconnecting it will quickly deteriorate into a laggy mess.

I think the solution I'm looking for is something that locks the single sae used for sending until the completion callback is reached, at which point the sae is unlocked and reused to send any pending data. The actual implementation of it eludes me, though.
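
For what it's worth, here is a minimal sketch of that exact idea, assuming one SocketAsyncEventArgs per client plus a ConcurrentQueue of pending messages; ClientSender, Enqueue, and the error handling are illustrative names and choices, not a definitive implementation:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

// Illustrative sketch: one sae per client, "locked" while a send is in
// flight; pending messages queue up and are drained from the Completed
// callback. ClientSender and Enqueue are hypothetical names.
public sealed class ClientSender
{
    private readonly Socket _socket;
    private readonly SocketAsyncEventArgs _sendArgs = new SocketAsyncEventArgs();
    private readonly ConcurrentQueue<byte[]> _pending = new ConcurrentQueue<byte[]>();
    private int _sending; // 0 = sae is free, 1 = a SendAsync is in flight

    public ClientSender(Socket socket)
    {
        _socket = socket;
        _sendArgs.Completed += OnSendCompleted;
    }

    public void Enqueue(byte[] message)
    {
        _pending.Enqueue(message);
        TrySendNext();
    }

    private void TrySendNext()
    {
        // Whoever flips _sending from 0 to 1 owns the single sae.
        if (Interlocked.CompareExchange(ref _sending, 1, 0) != 0)
            return;

        if (!_pending.TryDequeue(out byte[] message))
        {
            Interlocked.Exchange(ref _sending, 0); // nothing to send, unlock
            if (!_pending.IsEmpty)                 // a message raced in; retry
                TrySendNext();
            return;
        }

        _sendArgs.SetBuffer(message, 0, message.Length);
        if (!_socket.SendAsync(_sendArgs))
            OnSendCompleted(this, _sendArgs);      // completed synchronously
    }

    private void OnSendCompleted(object sender, SocketAsyncEventArgs e)
    {
        if (e.SocketError != SocketError.Success)
        {
            _socket.Close(); // treat the stalled/broken client as disconnected
            return;
        }
        // A production version should also handle e.BytesTransferred being
        // less than the buffer length (a partial send) before unlocking.
        Interlocked.Exchange(ref _sending, 0);
        TrySendNext(); // drain anything queued while the sae was busy
    }
}
```

With this shape, a stalled client ties up exactly one sae and its own queue, never the shared pool; dropping the client then just means closing the socket and discarding its queue.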


Before you start reading on, make sure you have played with and understood Socket.SendTimeout.
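
Note that SendTimeout applies to synchronous Send calls only; SendAsync is unaffected and simply stays pending. That is exactly why it matters for the second approach below. A minimal example of setting it:

```csharp
using System.Net.Sockets;

// SendTimeout bounds how long a synchronous Send may block on a stalled
// client before throwing a SocketException; 0 (the default) blocks forever.
var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.SendTimeout = 5000; // milliseconds
```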

I'd suggest two different approaches:

  • First, you can try a pool of sae's. The best class for that is ConcurrentBag (I assume you need thread safety). We did this for a buffer pool at my company: a small wrapper around ConcurrentBag that creates a new object if by chance the pool is empty. In your use case, if a client's pool runs dry because its sends never complete, you should consider that client disconnected. Of course, you need one such pool per connection, so I'd suggest a dictionary indexed by the socket. It's a bit ugly, but it works. (A minimal pool sketch follows this list.)

  • Second, you can implement a send queue of your own, but you'll be duplicating the async functionality in .NET (and probably making it worse). You need your own thread and a synchronized queue of messages to be sent. When you do a send, you push the message onto the queue. The background thread periodically checks the state of the queue and, if it's not empty, pops a few messages and sends them synchronously. I'd recommend signaling an AutoResetEvent each time you add something to the queue, to wake up the background thread. (See the second sketch below.)
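
A minimal sketch of the pool wrapper described in the first point, assuming a factory delegate for preallocation; SaeaPool, Rent, and Return are illustrative names, not an existing API:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

// ConcurrentBag-backed pool that allocates a new SocketAsyncEventArgs
// when the bag happens to be empty, as described above.
public sealed class SaeaPool
{
    private readonly ConcurrentBag<SocketAsyncEventArgs> _bag =
        new ConcurrentBag<SocketAsyncEventArgs>();
    private readonly Func<SocketAsyncEventArgs> _factory;

    public SaeaPool(Func<SocketAsyncEventArgs> factory, int preallocate)
    {
        _factory = factory;
        for (int i = 0; i < preallocate; i++)
            _bag.Add(factory());
    }

    public SocketAsyncEventArgs Rent()
    {
        // Create on demand if the pool is momentarily empty.
        return _bag.TryTake(out var args) ? args : _factory();
    }

    public void Return(SocketAsyncEventArgs args) => _bag.Add(args);
}
```

One of these per connection (say, in a Dictionary<Socket, SaeaPool>) lets you cap or monitor a single client's usage and cut it off without starving the others.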
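And a minimal sketch of the second approach, under the assumption that synchronous Socket.Send bounded by SendTimeout is acceptable for your throughput; SendQueue and its members are illustrative:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

// A dedicated sender thread draining a queue of (socket, payload) pairs,
// woken by an AutoResetEvent whenever a message is enqueued.
public sealed class SendQueue : IDisposable
{
    private readonly ConcurrentQueue<(Socket Socket, byte[] Data)> _queue =
        new ConcurrentQueue<(Socket, byte[])>();
    private readonly AutoResetEvent _signal = new AutoResetEvent(false);
    private readonly Thread _worker;
    private volatile bool _running = true;

    public SendQueue()
    {
        _worker = new Thread(Loop) { IsBackground = true };
        _worker.Start();
    }

    public void Enqueue(Socket socket, byte[] data)
    {
        _queue.Enqueue((socket, data));
        _signal.Set(); // wake the background thread
    }

    private void Loop()
    {
        while (_running)
        {
            _signal.WaitOne();
            while (_queue.TryDequeue(out var item))
            {
                try
                {
                    // Synchronous send; Socket.SendTimeout bounds how long
                    // a stalled client can hold this thread up.
                    item.Socket.Send(item.Data);
                }
                catch (SocketException)
                {
                    item.Socket.Close(); // treat the client as disconnected
                }
            }
        }
    }

    public void Dispose()
    {
        _running = false;
        _signal.Set();
        _worker.Join();
    }
}
```

The trade-off versus the first approach: one slow client can delay everyone behind it in the shared queue, so a per-client queue (or a per-client worker) is the safer variant if head-of-line blocking matters.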
