
Timing a set of methods - the second time they are run, they are quicker


I have an algorithm that I'm timing with System.Diagnostics - via the Stopwatch class.

It works well, but one thing I have noticed is that the first time I run the algorithm it takes around 52 milliseconds, which is fine.

The second time I run the algorithm it takes only a fraction of that time.

Is this due to the nature of .NET?

Each time I run the algorithm with a new set of data I re-initialise it. In other words, I create a new object rather than re-use the old reference, so I'm not sure why this still occurs. Normally I wouldn't care about something like this, but for this assignment I must measure the efficiency and speed of my algorithms, so it is important for me to understand why this is happening.

A simplified version (C#) of how I'm using the timer is below:

    using System.Diagnostics;

    class Algorithm
    {
        public Stopwatch Stopwatch { get; } = new Stopwatch();

        public void MethodA()
        {
            Stopwatch.Start();
            // Do work.
            Stopwatch.Stop();
        }

        public void MethodB()
        {
            Stopwatch.Start();
            // Do work.
            Stopwatch.Stop();
        }
    }

After both methods are called in my runner, I get the stopwatch and inspect the time.
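Roughly, that part of the runner looks like this (simplified; the class and method names are just illustrative):

    var algorithm = new Algorithm();   // fresh object for each new set of data
    algorithm.MethodA();
    algorithm.MethodB();
    // After both methods have run, read the accumulated time.
    Console.WriteLine(algorithm.Stopwatch.Elapsed.TotalMilliseconds);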

The algorithm

The algorithm is tactical waypoint reasoning for computer-controlled AI opponents. I tried to keep it as simple as possible in the above example.

Results

19.7847
0.0443
0.0102
0.0159
0.0091
0.0073
0.0079
0.0079
0.0079
0.0079
0.0079
0.0079
0.0136
0.0079
0.0073
0.0079
0.0079
0.0079
0.0079
0.0073
...

Should I just ignore the first time the algorithm is run? Otherwise I'll end up with an average that is dominated by the value from that first run.


If you're only timing for 52 milliseconds, any number of things could be happening - that's a very small amount of time to measure.

It could well be that it's due to JIT compilation of the method and everything it touches, for example.

In general, to get useful measurements you should time multiple iterations to get a longer period - this reduces the noise due to (for example) some other event in your operating system taking the CPU away briefly.
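For example, something along these lines (a sketch only; RunAlgorithm is a hypothetical wrapper around whatever work you're measuring):

    // Time many iterations and divide to get a per-iteration average.
    const int iterations = 10000;
    var sw = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        RunAlgorithm();   // hypothetical wrapper around the work being measured
    }
    sw.Stop();
    Console.WriteLine(sw.Elapsed.TotalMilliseconds / iterations);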


Repeat your tests thousands of times in a loop to get an average. You should try not to allocate and deallocate objects when you do this, so you reduce the possibility of a garbage collection.
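For instance, creating the object once and collecting garbage up front (a sketch, not a complete benchmark harness; it reuses the Algorithm class from the question):

    // Reuse a single instance and force a collection before timing, so a
    // GC pause is less likely to land inside the timed region.
    var algorithm = new Algorithm();   // created once, reused in the loop
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    var sw = Stopwatch.StartNew();
    for (int i = 0; i < 10000; i++)
    {
        algorithm.MethodA();
        algorithm.MethodB();
    }
    sw.Stop();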


The first time it runs, the IL must be JIT-compiled, which incurs an overhead. Subsequent executions do not incur this cost.
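One way to take that cost out of the measurement is to run the code once as a warm-up before timing it, for example (a sketch, assuming the Stopwatch property from the question):

    // A warm-up call triggers JIT compilation outside the measured region;
    // reset the stopwatch afterwards and measure the warm runs.
    algorithm.MethodA();
    algorithm.MethodB();
    algorithm.Stopwatch.Reset();

    algorithm.MethodA();               // these calls hit already-compiled code
    algorithm.MethodB();
    Console.WriteLine(algorithm.Stopwatch.Elapsed.TotalMilliseconds);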

