.NET IOCP ThreadPool overhead with async UDP operations

I have developed a VoIP media server which exchanges RTP packets with remote SIP endpoints. It needs to scale well - and while I was initially concerned that my C# implementation would not come close to the C++ version it replaces, I have used various profilers to hone the implementation and performance is pretty close.

I have eliminated most object allocations by creating pools of reusable objects, I am using ReceiveFromAsync and SendToAsync to send/receive datagrams, and I am using producer/consumer queues to pass RTP packets around the system. On a machine with 2 x 2.4GHz Xeon processors I can now handle about 1000 concurrent streams, each sending/receiving 50 packets per second. However, the iterative profile/tweak/profile cycle has me hooked - and I am sure there is more efficiency in there somewhere!
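
As a rough sketch of that pooling pattern (the type and member names below are hypothetical, not taken from the actual server), each SocketAsyncEventArgs is created once with a fixed buffer and recycled, so steady-state receives perform no per-packet allocations:

```csharp
// Sketch only; names are hypothetical.
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;

class ReceiveArgsPool
{
    private readonly ConcurrentBag<SocketAsyncEventArgs> _pool =
        new ConcurrentBag<SocketAsyncEventArgs>();

    public ReceiveArgsPool(int count, int bufferSize,
                           EventHandler<SocketAsyncEventArgs> completed)
    {
        for (int i = 0; i < count; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[bufferSize], 0, bufferSize);     // allocated once, reused
            args.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);  // required by ReceiveFromAsync
            args.Completed += completed;
            _pool.Add(args);
        }
    }

    public SocketAsyncEventArgs Rent()
    {
        SocketAsyncEventArgs args;
        return _pool.TryTake(out args) ? args : null; // null signals pool exhaustion
    }

    public void Return(SocketAsyncEventArgs args)
    {
        _pool.Add(args);
    }
}
```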

The event that triggers processing is the Completed delegate being called on a SocketAsyncEventArgs - which in turn sends the RTP packets through the processing pipeline.
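
A hedged sketch of that handoff (again with hypothetical type names): the Completed handler copies the datagram out, queues it for the pipeline threads, and immediately re-posts the receive on the same args object so the IOCP thread does as little work as possible:

```csharp
// Sketch only; names are hypothetical.
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

class RtpReceiver
{
    private readonly Socket _socket;
    private readonly BlockingCollection<byte[]> _pipeline; // producer/consumer queue

    public RtpReceiver(Socket socket, BlockingCollection<byte[]> pipeline)
    {
        _socket = socket;
        _pipeline = pipeline;
    }

    public void OnCompleted(object sender, SocketAsyncEventArgs args)
    {
        do
        {
            if (args.SocketError == SocketError.Success && args.BytesTransferred > 0)
            {
                // Copying frees the pooled buffer for the next receive; a real
                // implementation would rent this array from a pool as well.
                var packet = new byte[args.BytesTransferred];
                Buffer.BlockCopy(args.Buffer, args.Offset, packet, 0, args.BytesTransferred);
                _pipeline.Add(packet);
            }
            // ReceiveFromAsync returns false when it completed synchronously,
            // in which case Completed will not fire, so loop and handle inline.
        } while (!_socket.ReceiveFromAsync(args));
    }
}
```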

The remaining frustration is that there seems to be significant overhead in the IOCP threadpool. The profiler shows that only 72% of Inclusive Sample time is spent in 'my code' - everything above that appears to be threadpool overhead (stack frames below).

So, my questions are:

  1. Am I missing something in my understanding?
  2. Is it possible to reduce this overhead?
  3. Is it possible to replace the threadpool used by the async socket functions with a custom, lightweight threadpool that has less overhead?

100% MediaGateway
95.35% Thread::intermediateThreadProc(void *)
88.37% ThreadNative::SetDomainLocalStore(class Object *)
88.37% BindIoCompletionCallbackStub(unsigned long,unsigned long,struct _OVERLAPPED *)
86.05% BindIoCompletionCallbackStubEx(unsigned long,unsigned long,struct _OVERLAPPED *,int)
86.05% ManagedThreadBase::ThreadPool(struct ADID,void (*)(void *),void *)
86.05% CrstBase::Enter(void)
86.05% AppDomainStack::PushDomain(struct ADID)
86.05% Thread::ShouldChangeAbortToUnload(class Frame *,class Frame *)
86.05% AppDomainStack::ClearDomainStack(void)
83.72% ThreadPoolNative::CorWaitHandleCleanupNative(void *)
83.72% __CT??_R0PAVEEArgumentException@@@84
83.72% DispatchCallDebuggerWrapper(unsigned long *,unsigned long,unsigned long *,unsigned __int64,void *,unsigned __int64,unsigned int,unsigned char *,class ContextTransitionFrame *)
83.72% DispatchCallBody(unsigned long *,unsigned long,unsigned long *,unsigned __int64,void *,unsigned __int64,unsigned int,unsigned char *)
83.72% MethodDesc::EnsureActive(void)
81.40% _CallDescrWorker@20
81.40% System.Threading._IOCompletionCallback.PerformIOCompletionCallback(uint32,uint32,valuetype System.Threading.NativeOverlapped*)
76.74% System.Net.Sockets.SocketAsyncEventArgs.CompletionPortCallback(uint32,uint32,valuetype System.Threading.NativeOverlapped*)
76.74% System.Net.Sockets.SocketAsyncEventArgs.FinishOperationSuccess(valuetype System.Net.Sockets.SocketError,int32,valuetype System.Net.Sockets.SocketFlags)
74.42% System.Threading.ExecutionContext.Run(class System.Threading.ExecutionContext,class System.Threading.ContextCallback,object)
72.09% System.Net.Sockets.SocketAsyncEventArgs.ExecutionCallback(object)
72.09% System.Net.Sockets.SocketAsyncEventArgs.OnCompleted(class System.Net.Sockets.SocketAsyncEventArgs)


50,000 packets per second on Windows is pretty good; I would say that the hardware and operating system are more significant issues for scaling. Different network interfaces impose different limits: Intel server NICs are predominantly high performance, with good drivers across platforms, whereas Broadcom does not have a good record on Windows compared with Linux. The advanced core networking APIs of Windows are only enabled if the drivers support the features, and Broadcom has shown itself to be a company that only enables advanced features on newer hardware, even where older devices support them on other operating systems.

I would start by investigating multiple NICs, for example a quad-port Intel server NIC, and use the Windows advanced networking APIs to bind one NIC to each processing core. In theory you could send 50,000 packets per second through one NIC and another 50,000 through a second.

http://msdn.microsoft.com/en-us/library/ff568337(v=VS.85).aspx
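
As a rough illustration of the managed side of this (the addresses below are placeholders, and the per-core binding itself is configured through the driver/OS features described in the link above, not in code), binding a socket to each NIC's own local address rather than IPAddress.Any splits the traffic across interfaces:

```csharp
// Sketch only; addresses are placeholders, one per physical NIC.
using System;
using System.Net;
using System.Net.Sockets;

class PerNicSockets
{
    static Socket BindToNic(string localAddress, int port)
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Dgram, ProtocolType.Udp);
        // Binding to the NIC's own address ties this socket's traffic
        // to that interface.
        socket.Bind(new IPEndPoint(IPAddress.Parse(localAddress), port));
        return socket;
    }

    static void Main()
    {
        var nicA = BindToNic("192.168.1.10", 5004);
        var nicB = BindToNic("192.168.2.10", 5004);
        Console.WriteLine("Listening on {0} and {1}",
                          nicA.LocalEndPoint, nicB.LocalEndPoint);
    }
}
```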

However, it seems that you don't really have a baseline to measure the efficiency of the code against. I would expect to see comparisons with servers running no VoIP payload, with a TCP transport instead of UDP, and on other operating systems, to compare IP stack and API efficiency.
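
Such a no-payload baseline can be very simple, for example a blocking UDP sink that only counts packets per second, measuring raw IP stack and API cost with no RTP processing (the port number below is arbitrary):

```csharp
// Sketch of a minimal baseline measurement tool.
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class UdpBaseline
{
    static long _count;

    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Dgram, ProtocolType.Udp);
        socket.Bind(new IPEndPoint(IPAddress.Any, 5004));

        // Report the packet rate once a second on a background thread.
        new Thread(() =>
        {
            while (true)
            {
                Thread.Sleep(1000);
                Console.WriteLine("{0} pkt/s", Interlocked.Exchange(ref _count, 0));
            }
        }) { IsBackground = true }.Start();

        var buffer = new byte[2048];
        EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            socket.ReceiveFrom(buffer, ref remote); // synchronous on purpose: no pools, no pipeline
            Interlocked.Increment(ref _count);
        }
    }
}
```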


Just to add some info: I recently discovered there is a bug in the IOCP thread pool that might influence your performance; see point 3 of the 'Cause' section in http://support.microsoft.com/kb/2538826. It might apply to your case.
