Help me analyze a dump file

Customers are reporting problems almost every day at around the same time of day. The application runs on two nodes. It is the Metastorm BPM platform, and it calls into our code.

In some dumps I noticed very long-running threads (~50 minutes), but not in all of them. Administrators also tell me that just before users report problems, memory usage goes up. Then everything slows down to the point where users can't work, and the admins have to restart the platform on both nodes.

My first thought was deadlocks (because of the long-running threads), but I couldn't confirm that; !syncblk doesn't return anything. Then I looked at memory usage. I noticed a lot of dynamic assemblies, so I suspected an assembly leak, but it doesn't look like that: I received a dump from a day when everything was working fine, and the number of dynamic assemblies is similar. So maybe a memory leak, I thought, but I can't confirm that either. !dumpheap -stat shows that memory usage grows, but I haven't found anything interesting with !gcroot.

There is one thing I don't understand, though: threadpool completion port threads. There are a lot of them. Maybe something is waiting on something else? Here is the data I can give you so far that fits in this post. Could you suggest anything that would help diagnose this situation?
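
For reference, the per-node numbers below were pulled with commands along these lines (a sketch; exact syntax and output depend on the SOS version loaded for this CLR):

.loadby sos mscorwks      - load SOS for the .NET 2.0 CLR (adjust for the actual runtime)
!threadpool               - CPU utilization, worker/completion port thread counts, timers
!dumpheap -stat           - managed heap statistics per type, sorted by total size
!eeheap -gc               - total size of the GC heaps
!eeheap -loader           - size of the loader heaps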

Users not reporting problems:
                    Node1                       Node2
Size of dump:       638MB                       646MB
DynamicAssemblies   259                         265
GC Heaps:           37MB                        35MB                    
Loader Heaps:       11MB                        11MB

Node1:
Number of Timers: 12
CPU utilization 2%
Worker Thread: Total: 5 Running: 0 Idle: 5 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 2 Free: 2 MaxFree: 16 CurrentLimit: 4 MaxLimit: 1000 MinLimit: 8

!dumpheap -stat (biggest)
0x793041d0   32,664    2,563,292 System.Object[]
0x79332b9c   23,072    3,485,624 System.Int32[]
0x79330a00   46,823    3,530,664 System.String
0x79333470   22,549    4,049,536 System.Byte[]

Node2:
Number of Timers: 12
CPU utilization 0%
Worker Thread: Total: 7 Running: 0 Idle: 7 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 3 Free: 1 MaxFree: 16 CurrentLimit: 5 MaxLimit: 1000 MinLimit: 8

!dumpheap -stat
0x793041d0   30,678    2,537,272 System.Object[]
0x79332b9c   21,589    3,298,488 System.Int32[]
0x79333470   21,825    3,680,000 System.Byte[]
0x79330a00   46,938    5,446,576 System.String
----------------------------------------------------------------------------------------------------

Users start to report problems:
                    Node1                      Node2
Size of dump:       662MB                       655MB
DynamicAssemblies   236                         235
GC Heaps:           159MB                       113MB                   
Loader Heaps:       10MB                        10MB

Node1:
Work Request in Queue: 0
Number of Timers: 14
CPU utilization 20%
Worker Thread: Total: 7 Running: 0 Idle: 7 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 48 Free: 1 MaxFree: 16 CurrentLimit: 49 MaxLimit: 1000 MinLimit: 8

!dumpheap -stat
0x7932a208   88,974    3,914,856 System.Threading.ReaderWriterLock
0x79333054   71,397    3,998,232 System.Collections.Hashtable
0x24f70350  319,053    5,104,848 Our.Class
0x79332b9c   53,190    6,821,588 System.Int32[]
0x79333470   52,693    6,883,120 System.Byte[]
0x79333150   72,900   11,081,328 System.Collections.Hashtable+bucket[]
0x793041d0  247,011   26,229,980 System.Object[]
0x79330a00  644,807   34,144,396 System.String

Node2:
Work Request in Queue: 1
Number of Timers: 17
CPU utilization 17%
Worker Thread: Total: 6 Running: 0 Idle: 6 MaxLimit: 2000 MinLimit: 200
Completion Port Thread:Total: 48 Free: 2 MaxFree: 16 CurrentLimit: 49 MaxLimit: 1000 MinLimit: 8

!dumpheap -stat
0x7932a208   76,425    3,362,700 System.Threading.ReaderWriterLock
0x79332b9c   42,417    5,695,492 System.Int32[]
0x79333150   41,172    6,451,368 System.Collections.Hashtable+bucket[]
0x79333470   44,052    6,792,004 System.Byte[]
0x793041d0  175,973   18,573,780 System.Object[]
0x79330a00  397,361   21,489,204 System.String
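
For the record, the !gcroot checks were done roughly like this, sampling a few addresses from the types that grow the most (a sketch; the MT 0x24f70350 is Our.Class from the Node1 listing above, and <address> stands for one of the instance addresses returned):

!dumpheap -mt 0x24f70350    - dump all Our.Class instances with their addresses
!gcroot <address>           - show what is keeping a sampled instance alive
!objsize <address>          - total bytes retained through that instance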

Edit: I downloaded DebugDiag and let it analyze my dumps. Here is part of the output:

The following threads in process_name name_of_dump.dmp are making a COM call to thread 193 within the same process which in turn is waiting on data to be returned from another server via WinSock.

 The call to WinSock originated from 0x0107b03b and is destined for port xxxx at IP address xxx.xxx.xxx.xxx


( 18 76 172 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 210 211 212 213 214 215 216 217 218 224 225 226 227 228 229 231 232 233 236 239 )

14,79% of threads blocked

And the recommendation is:

Several threads making calls to the same STA thread can cause a performance bottleneck due to serialization. Server side COM servers are recommended to be thread aware and follow MTA guidelines when multiple threads are sharing the same object instance.

I checked in WinDbg what thread 193 is doing. It is executing our code; our code calls some Metastorm engine code, and it hangs on a remoting call. But !runaway shows only about 8 seconds for that thread, so it hasn't been stuck for long. Then I checked what those waiting threads are doing. All of them except thread 18 are in:

System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, System.Threading.NativeOverlapped*)
I could understand one, but why so many of them? Is this specific to the business process modeling engine we're using, or is it something typical? My guess is that it ties up threads that could be serving other clients, and that's why users report the slowdown. Are these the completion port threads I asked about before? Can I do anything more to diagnose this, or have I already found that our code is the cause?
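
For context, this is roughly how I looked at thread 193 and the waiting threads (a sketch; the thread numbers are the debugger thread IDs from the DebugDiag list above):

!runaway             - user-mode CPU time per thread (this is where the ~8 seconds figure comes from)
~193s                - switch the debugger to thread 193
!clrstack            - managed stack of the current thread
~*e !clrstack        - managed stacks of all threads (shows the PerformIOCompletionCallback waiters)
!threads             - managed thread list; the last column usually tags special threads, e.g. (Threadpool Completion Port)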


From the looks of the output, most of the memory is not on the .NET heaps (only about 35 MB out of ~650 MB in the dumps taken while everything was fine), so if you are looking at the .NET heaps I think you are looking in the wrong place. The memory is probably either in assemblies or in native memory, if you are using some native component for file transfers or similar. You would want to use DebugDiag to monitor that.
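
To confirm where the memory actually lives, a quick breakdown along these lines in WinDbg usually helps (a sketch against the same dumps):

!address -summary    - virtual memory by type and state (private, mapped, image)
!eeheap -gc          - total size of the managed GC heaps
!eeheap -loader      - loader heap usage (where dynamic assemblies end up)
lm                   - loaded modules, to spot native components in the process

If the process size keeps growing while !eeheap -gc stays small, the growth is in native allocations rather than on the managed heap, and a DebugDiag memory leak rule can then attribute the native allocations to a component.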

It is hard to say whether you are leaking dynamic assemblies without looking at the pattern of growth, so I would suggest watching the .NET CLR Loading\Current Assemblies counter in perfmon to see if it keeps growing over time. If it does, you would have to investigate further by looking at what the dynamic assemblies are, for example with !dda.
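
A minimal way to track that over time, assuming the Metastorm host process is w3wp (substitute the actual process name), is something like:

typeperf "\.NET CLR Loading(w3wp)\Current Assemblies" -si 60 -o assemblies.csv

Collecting it alongside Process\Private Bytes and .NET CLR Memory\# Bytes in all Heaps makes it easier to tell an assembly leak from a native or managed heap leak.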
