Some code I just wrote follows.
It demonstrates applying a PostSharp aspect to a method in order to record the duration of the method invocation asynchronously, so that if the logging process is slow, the performance penalty is not felt by the caller of the method decorated with the aspect.
It seems to work: MyFirstMethod completes, the logging method is kicked off on a separate thread, and MySecondMethod runs on in parallel. The idea is that methods within a very highly-trafficked web application (i.e. a highly multi-threaded environment) would be decorated with similar instrumentation.
What are the pitfalls of doing so? (e.g. I am concerned about reaching a limit on the number of threads permitted at any given time).
using System;
using System.Threading.Tasks;
using NUnit.Framework;
using PostSharp.Aspects;

namespace Test
{
    [TestFixture]
    public class TestClass
    {
        [Test]
        public void MyTest()
        {
            MyFirstMethod();
            MySecondMethod();
        }

        [PerformanceInstrument]
        private void MyFirstMethod()
        {
            // do nothing
        }

        private void MySecondMethod()
        {
            for (int x = 0; x < 9999999; x++) ;
        }
    }

    [Serializable]
    public class PerformanceInstrument : MethodInterceptionAspect
    {
        public override void OnInvoke(MethodInterceptionArgs args)
        {
            var startDtg = DateTime.Now;
            args.Proceed();
            var duration = DateTime.Now - startDtg;
            Task.Factory.StartNew(() => MyLogger.MyLoggingMethod(duration)); // invoke the logging method asynchronously
        }
    }

    public static class MyLogger
    {
        public static void MyLoggingMethod(TimeSpan duration)
        {
            for (int x = 0; x < 9999999; x++) ;
            Console.WriteLine(duration);
        }
    }
}
The only possible downside that I see here is the overhead of managing the Tasks, which I am sure is probably insignificant, but I have not delved deeply into the TPL to be certain.
An alternative approach that I have used in a large-scale web application is to have the logging write the log message records to an in-memory list, with a background thread responsible for writing the log messages out. Currently the solution has the thread check the list every so often and flush it to disk (in our case a database) if the list length has exceeded a certain threshold or the list has not been flushed for longer than a specific amount of time, whichever comes first.
This is essentially the Producer/Consumer pattern, where your code produces log messages and the consumer is responsible for flushing those messages to the persistence medium.
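A minimal sketch of that buffer-and-flush approach (class and member names here are illustrative, not from the original application; the flush delegate stands in for the database write):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch: producers append messages to an in-memory list; a timer-driven
// background callback flushes when the list exceeds a size threshold or a
// maximum interval has elapsed, whichever comes first.
public class BufferedLogger : IDisposable
{
    private readonly List<string> _buffer = new List<string>();
    private readonly object _lock = new object();
    private readonly int _maxCount;
    private readonly TimeSpan _maxAge;
    private readonly Action<IList<string>> _flush; // stand-in for the database write
    private readonly Timer _timer;
    private DateTime _lastFlush = DateTime.UtcNow;

    public BufferedLogger(int maxCount, TimeSpan maxAge, Action<IList<string>> flush)
    {
        _maxCount = maxCount;
        _maxAge = maxAge;
        _flush = flush;
        _timer = new Timer(_ => FlushIfDue(), null, maxAge, maxAge);
    }

    public void Log(string message)
    {
        List<string> toFlush = null;
        lock (_lock)
        {
            _buffer.Add(message);
            if (_buffer.Count >= _maxCount)
                toFlush = Drain();
        }
        if (toFlush != null) _flush(toFlush); // flush outside the lock
    }

    private void FlushIfDue()
    {
        List<string> toFlush = null;
        lock (_lock)
        {
            if (_buffer.Count > 0 && DateTime.UtcNow - _lastFlush >= _maxAge)
                toFlush = Drain();
        }
        if (toFlush != null) _flush(toFlush);
    }

    private List<string> Drain() // must be called under _lock
    {
        var copy = new List<string>(_buffer);
        _buffer.Clear();
        _lastFlush = DateTime.UtcNow;
        return copy;
    }

    public void Dispose() // final flush on shutdown
    {
        _timer.Dispose();
        List<string> toFlush;
        lock (_lock) { toFlush = Drain(); }
        if (toFlush.Count > 0) _flush(toFlush);
    }
}
```

Flushing outside the lock keeps producers from blocking behind a slow database write; the trade-off is that messages buffered in memory are lost if the process dies before a flush.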
There may be unintended consequences to your approach because the ASP.NET engine and the Task Parallel Library both schedule work on the .NET thread pool. Each web request is serviced by a thread from the thread pool. If you are scheduling Tasks to handle logging then you are consuming pool threads which can no longer be used to service web requests. This may reduce throughput.
The TPL team blogged about this here.
http://blogs.msdn.com/b/pfxteam/archive/2010/02/08/9960003.aspx
The producer/consumer pattern would mean that your MethodInterceptionAspect simply adds an entry to a global queue (as suggested by Ben) and a single (long-running) Task processes all the entries. So your interception method becomes:
BlockingCollection<TimeSpan> _queue = new BlockingCollection<TimeSpan>();

public override void OnInvoke(MethodInterceptionArgs args)
{
    var startDtg = DateTime.Now;
    args.Proceed();
    var duration = DateTime.Now - startDtg;
    _queue.Add(duration); // Add on an unbounded BlockingCollection is cheap, so no extra Task is needed here
}
Somewhere else you process the queue:
foreach (var d in _queue.GetConsumingEnumerable())
{
    Console.WriteLine(d);
}
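Putting the pieces together, here is a self-contained sketch of this producer/consumer logger (the class and member names are illustrative; GetConsumingEnumerable and CompleteAdding are provided by BlockingCollection, and the Flushed list stands in for the real persistence medium):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Sketch: timings are produced by many threads and consumed by one
// long-running task. GetConsumingEnumerable blocks until items arrive
// and ends once CompleteAdding has been called and the queue is drained.
public class TimingLog
{
    private readonly BlockingCollection<TimeSpan> _queue = new BlockingCollection<TimeSpan>();
    private readonly Task _consumer;
    public readonly List<TimeSpan> Flushed = new List<TimeSpan>(); // stand-in for database/disk

    public TimingLog()
    {
        // LongRunning hints the default scheduler to use a dedicated thread,
        // keeping the consumer off the thread pool that ASP.NET shares.
        _consumer = Task.Factory.StartNew(() =>
        {
            foreach (var d in _queue.GetConsumingEnumerable())
            {
                Flushed.Add(d); // replace with the real persistence write
            }
        }, TaskCreationOptions.LongRunning);
    }

    public void Record(TimeSpan duration)
    {
        _queue.Add(duration); // called from the aspect; cheap and non-blocking when unbounded
    }

    public void Shutdown()
    {
        _queue.CompleteAdding(); // lets the consumer drain remaining items, then exit
        _consumer.Wait();
    }
}
```

Because the consumer is a single dedicated thread, writes to the persistence medium are naturally serialized and no pool threads are tied up in logging.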
The following post shows a similar implementation where multiple tasks created by a Parallel.For loop add images to a BlockingCollection and a single task processes the images.
Parallel Task Library WaitAny Design
How well this works depends a bit on the length of your request processing, the number of log entries you want to process per request, the overall server load, etc. One thing you have to be aware of is that, overall, you need to be able to remove entries from the queue faster than they are being added.
Have you thought about an approach where you write your own performance counter and let the perf counter infrastructure handle the heavy lifting for you? That would save you from needing to implement any of this recording infrastructure yourself.