How can I measure the execution time of a for loop?

I want to measure the execution time of for loops on various platforms like PHP, C, Python, Java, JavaScript... How can I measure it?

I know these platforms, so I am talking about something like this:

 for (i = 0; i < 1000000; i++)
 {

 }   

I don't want to measure anything within the loop.

A little modification:

@all: Some friends of mine say the compiler will optimize this code, making the loop useless. I agree with that. We could add a small statement inside, such as an increment, but the fact is I just want to calculate the execution time per iteration of a loop in various languages. Adding an increment statement would add to the execution time and affect the results, because the cost of incrementing a value also differs across platforms, which would make the results useless. In short, a better way to ask is:

I WANT TO CALCULATE THE EXECUTION TIME PER ITERATION OF A LOOP ON VARIOUS PLATFORMS. HOW CAN I DO THIS?

edit---

I came to know about Python's profiler modules, which evaluate CPU time and absolute time. Any suggestions? Meanwhile, I am working on this...


Although an answer has been given for C++, it looks from your description ("[You] don't want to measure anything within the loop") like you're trying to measure the time which it takes a program to iterate over an empty loop.

Please take care here: not only will the timing vary across platforms and processors, but many compilers will optimise away such loops, effectively rendering the answer as "0" for any loop size.


Note that it also depends on what exactly you want to achieve: do you care about the time your program spends waiting because it has been preempted by the system scheduler? All of the solutions above measure elapsed wall-clock time, which also includes the time when other processes run instead of your own.

If you don't care about this, all of the solutions above are good. If you do care, you probably need some profiling software to actually see how long the loop takes.
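
If you do care about the difference, one rough way to see it, using Java as an example (just a sketch, assuming a standard JDK; the loop body is an arbitrary placeholder), is to compare wall-clock time against the CPU time of the current thread:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuVsWallTime {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();

        long wallStart = System.nanoTime();
        long cpuStart = bean.getCurrentThreadCpuTime(); // CPU time in nanoseconds, -1 if unsupported

        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += i; // keep a side effect so the loop isn't removed entirely
        }

        long wallElapsed = System.nanoTime() - wallStart;
        long cpuElapsed = bean.getCurrentThreadCpuTime() - cpuStart;

        System.out.println("sum = " + sum);
        System.out.println("wall-clock: " + wallElapsed + " ns");
        System.out.println("thread CPU: " + cpuElapsed + " ns");
    }
}

A large gap between the two numbers usually means the thread spent time preempted or waiting rather than computing.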

I'd start with a program that does nothing but your loop and (in a Linux environment at least) run time your-executable.

Then I'd investigate whether there are tools which work like time. I'm not sure, but I'd look at JRat for Java, and gcc's gcov for C and C++. No doubt there are similar tools for the other languages. But, of course, you need to check whether they report actual time or not.


In JavaScript:

var start = new Date();
for (var i = 0; i < 1000000; i++) {}
var time = new Date() - start; // elapsed time in milliseconds


The right way to do it in Python is to run timeit from the command line:

$ python -m timeit "for i in xrange(100): pass"
100000 loops, best of 3: 2.5 usec per loop


Another version in PHP that doesn't require any extra stuff:

$start = microtime(true);

for (...) {
   ....
}

$end = microtime(true);

echo ($end - $start).' seconds';


For compiled languages such as C and C++, make sure that your compiler flags are set such that the loop isn't optimized away. With optimization switched on I would expect most compilers to detect that nothing is going on in the loop and optimize it away.


If you're using Python, you can use a module specifically built for timing things. It's called timeit.

Here are a couple of references I found (just Googled it):

  1. Dive Into Python: Using the Timeit Module
  2. Python Documentation: Timeit Module

And here's some example code to get you started quickly:

>>> import timeit
>>> t = timeit.Timer("for i in range(100): pass", "")
>>> # timeit will run the statement 1,000,000 times by default and return the time
>>> # it took for all the runs together (it doesn't try to average them out or anything).
>>> t.timeit()
2.9035916423318398   # the result of running the statement 1,000,000 times


In PHP (using a code timer class):

// Assumes a timer class (e.g. from a code-timer library) with start(), stop() and getTime().
$timer = new timer();

$timer->start();

for ($i = 0; $i < 1000000; $i++)
{

}

$timer->stop();

echo $timer->getTime();


I asked the same question a while back, specifically for the C++ language. Here's the answer I ended up using:

#include <omp.h>

// Starting the time measurement
double start = omp_get_wtime();
// Computations to be measured
...
// Measuring the elapsed time
double end = omp_get_wtime();
// Time calculation (in seconds)
double elapsed = end - start;


The result won't make sense for an empty loop, since most compilers will optimize it away at compile time, before runtime.

To compare language speeds, you need a real algorithm, like merge sort, binary search, or maybe Dijkstra's if you want something complicated. Implement the same algorithm in all languages, then compare.
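
For instance, here is a rough Java sketch (the algorithm, array size, and loop count are arbitrary choices) that times repeated binary searches and could be ported line-for-line to the other languages:

import java.util.Arrays;

public class BinarySearchBenchmark {
    public static void main(String[] args) {
        int n = 1000000;                  // arbitrary problem size
        int[] data = new int[n];
        for (int i = 0; i < n; i++) {
            data[i] = i * 2;              // sorted input for binary search
        }

        long start = System.nanoTime();
        long hits = 0;
        for (int i = 0; i < n; i++) {
            if (Arrays.binarySearch(data, i) >= 0) {
                hits++;                   // use the result so the work can't be optimized away
            }
        }
        long elapsed = System.nanoTime() - start;

        System.out.println("hits: " + hits);
        System.out.println("elapsed: " + (elapsed / 1000000.0) + " ms");
    }
}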

Here is a benchmark on a bio-informatics algorithm; check the Results page.


To the point: just get the current time before doing something (this is the start time) and get the current time after doing something (this is the end time) and then just do the primary school math to get the elapsed time. Every API provides ways to get the current time. In Java for example it's System.currentTimeMillis() and System.nanoTime().
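
In Java that looks roughly like this (a minimal sketch; the 1,000,000 bound is the one from the question, and the loop body is just something to keep the loop alive):

public class LoopTiming {
    public static void main(String[] args) {
        long start = System.nanoTime();           // or System.currentTimeMillis()

        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += i;                             // some work so the JIT doesn't remove the loop
        }

        long elapsed = System.nanoTime() - start;
        System.out.println("sum = " + sum);
        System.out.println(elapsed + " ns total, about " + (elapsed / 1000000.0) + " ns per iteration");
    }
}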

But: especially in Java, the elapsed time isn't always that reliable. There can be micro-differences, and much depends on how you do the tests. I've seen cases where test2() was faster than test1() just because it was executed a bit later, and became slower when the execution order was rearranged to test2() and then test1().
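
A common way to reduce that effect (again just a sketch, not a rigorous benchmark harness) is to warm up the JIT by running the code a few times before the timed runs, and then to repeat the measurement and keep the best of several runs:

public class WarmedUpTiming {
    static long work() {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Warm-up runs so the JIT has a chance to compile work() before we time it.
        for (int i = 0; i < 10; i++) {
            work();
        }

        // Repeat the measurement and keep the fastest run.
        long best = Long.MAX_VALUE;
        for (int run = 0; run < 5; run++) {
            long start = System.nanoTime();
            long result = work();
            long elapsed = System.nanoTime() - start;
            if (result != 0 && elapsed < best) {   // use the result; track the best time
                best = elapsed;
            }
        }
        System.out.println("best run: " + best + " ns");
    }
}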

Last but not least, micro-optimization is the root of all evil.


For Java, both Apache Commons Lang and the Spring Framework have StopWatch classes (see the Javadoc for Apache's version) that you can use to measure execution time. Under the covers, though, it's just subtracting System.currentTimeMillis() values, so it doesn't save you that much code.
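
Usage looks roughly like this (a sketch against Apache Commons Lang 3; the exact class and method names differ slightly between the Apache and Spring variants and between versions):

import org.apache.commons.lang3.time.StopWatch;

public class StopWatchExample {
    public static void main(String[] args) {
        StopWatch watch = new StopWatch();
        watch.start();

        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += i;                    // something to time
        }

        watch.stop();
        System.out.println("sum = " + sum);
        System.out.println("elapsed: " + watch.getTime() + " ms"); // getTime() returns milliseconds
    }
}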
