I have a WCF service that is hosted in IIS6. The important part of the method looks like this:
public MyUser[] GetUsers(string appName, string[] names)
{
    List<MyUser> users = new List<MyUser>();
    foreach (string user in names)
    {
        MembershipUser mu = this.ADAMProvider.GetUser(user, false); // Unmanaged call to AzMan
        if (mu != null)
        {
            users.Add(MyUser.CreateFrom(mu));
        }
    }
    return users.ToArray();
}
The performance of this method is very poor when it is called with a large array of user names (over 100 or so). It can take over a minute to return. Also, if this method is called concurrently by more than one client, it will time out. I have even seen it bring down the app pool. Please note that inside the loop a call is made to AzMan, which is an unmanaged COM component.
To increase performance I am considering a multi-threaded approach. .NET 4 is not an option, so Parallel.For is out, but doing the equivalent in 3.5 is possible.
My question is: will creating a bunch of threads (then waiting for all of them before returning) actually increase performance? Is there any danger in doing this in an IIS6-hosted WCF service?
First, I have to point out that the COM component could be a problem depending on its apartment state. Single-threaded apartment (STA) objects can only run on one thread; an automatic marshalling operation on STA objects effectively serializes all calls to them, so no matter how hard you try you may not get any parallelism out of it. Even if it were an MTA object, there might still be a problem with the GetUser method if it is not designed to be thread-safe.
But assuming none of this is a problem¹, I would use the ThreadPool instead of creating a bunch of threads myself. Here is what it might look like.
public MyUser[] GetUsers(string appName, string[] names)
{
    int count = 1; // Holds the number of pending work items.
    var finished = new ManualResetEvent(false); // Used to wait for all work items to complete.
    var users = new List<MyUser>();
    foreach (string user in names)
    {
        string name = user; // Copy the loop variable so each closure captures its own value.
        Interlocked.Increment(ref count); // Indicate that we have another work item.
        ThreadPool.QueueUserWorkItem(
            (state) =>
            {
                try
                {
                    MembershipUser mu = this.ADAMProvider.GetUser(name, false);
                    if (mu != null)
                    {
                        lock (users)
                        {
                            users.Add(MyUser.CreateFrom(mu));
                        }
                    }
                }
                finally
                {
                    // Signal the event if this is the last work item.
                    if (Interlocked.Decrement(ref count) == 0) finished.Set();
                }
            });
    }
    // Signal the event if this is the last work item.
    if (Interlocked.Decrement(ref count) == 0) finished.Set();
    // Wait for all work items to complete.
    finished.WaitOne();
    return users.ToArray();
}
One confusing thing about the pattern I used above is that it treats the main thread (the one queueing the work) as if it were another work item; that is why count starts at 1 and why you see the same check-and-signal code after the loop. Without it there is a very subtle race condition that could occur between the Set and WaitOne calls.
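To make that race concrete, here is a deliberately broken (hypothetical) variant that starts count at 0 and lets only the work items decrement it:

```csharp
// BROKEN sketch: count starts at 0 and only the work items touch it.
int count = 0;
var finished = new ManualResetEvent(false);
foreach (string user in names)
{
    Interlocked.Increment(ref count);
    ThreadPool.QueueUserWorkItem(state =>
    {
        // ... do the work ...
        // If this item finishes before the loop queues the next one,
        // count drops to 0 and Set fires while work is still being added.
        if (Interlocked.Decrement(ref count) == 0) finished.Set();
    });
}
finished.WaitOne(); // Can return before all items were even queued.
```

Starting count at 1 and having the queueing thread perform the final decrement after the loop closes this window.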
By the way, I should point out that the TPL is also available for .NET 3.5 as part of the Reactive Extensions download (it ships a backported System.Threading.dll).
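With that backport referenced, the method could be sketched as below. This assumes Parallel.ForEach in the backport has the same shape as in .NET 4; the lock around the shared list is still required because the body runs on multiple threads:

```csharp
public MyUser[] GetUsers(string appName, string[] names)
{
    var users = new List<MyUser>();
    // Parallel.ForEach partitions names across pool threads and blocks
    // until every iteration has completed.
    Parallel.ForEach(names, user =>
    {
        MembershipUser mu = this.ADAMProvider.GetUser(user, false);
        if (mu != null)
        {
            lock (users) { users.Add(MyUser.CreateFrom(mu)); }
        }
    });
    return users.ToArray();
}
```

Of course, this helps only if the underlying COM calls can actually run in parallel, per the apartment-state caveat above.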
¹ I suspect one of the two problems I mentioned will be in play in reality.
Normally I'd say this may help; however, this part of the question is the concern:
Also, if this method is called concurrently by more than one client it will time out. I have even seen it bring down the app pool. Please note that in the loop a call to AzMan is being made. AzMan is an unmanaged COM component.
This sounds like the "AzMan" component is not thread safe. If that's the case, it will not be possible to multi-thread this routine effectively, since the method spends most of its time in that component.
If, however, that routine is thread safe and does not share state, multithreading might improve the performance. Even then, it depends on many other factors, including the workload of the machine itself (if all cores are already fairly well utilized, it may not help), etc.