I want to calculate how much CPU time my function takes to execute in Java. Currently I am doing it as below:
long startTime = System.currentTimeMillis();
myfunction();
long endTime = System.currentTimeMillis();
long searchTime = endTime - startTime;
But I found out that for the same input I get different times depending on system load.
So, how do I get the exact CPU time my function took to execute?
System.currentTimeMillis() will only ever measure wall-clock time, never CPU time. If you need wall-clock time, then System.nanoTime() is often more precise (and never worse) than currentTimeMillis().

ThreadMXBean.getThreadCpuTime() can help you find out how much CPU time a given thread has used. Use ManagementFactory.getThreadMXBean() to get a ThreadMXBean and Thread.getId() to find the id of the thread you're interested in. Note that this method need not be supported on every JVM!
As the JVM warms up, the amount of time taken will vary. The second time you run this, it will always be faster than the first (the first run has to load classes and execute static initializer blocks). After you have run the method about 10,000 times it will be faster again (10,000 is the default threshold at which HotSpot compiles the code to native machine code).
To get a reproducible average timing for a micro-benchmark, I suggest you ignore the first 10,000 iterations and run the test for 2-10 seconds after that.
e.g.
long start = 0;
int runs = 10_000; // choose so the timed phase lasts 2-10 seconds
for (int i = -10_000; i < runs; i++) {
    if (i == 0) start = System.nanoTime(); // start timing only after warmup
    // do test
}
long time = System.nanoTime() - start;
System.out.printf("Each XXXXX took an average of %,d ns%n", time / runs);
Very important: only put one of these timing loops in each method. This is because HotSpot optimises the whole method based on how it is used. If one method contains several busy loops, the later loops will appear slower: they have not yet run when the method is compiled, so they are optimised poorly.
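The one-loop-per-method rule can be sketched as follows: each workload gets its own method containing its own warmup and timing loop, so the JIT compiles each independently. The class and method names, and the two string workloads, are purely illustrative.

```java
public class LoopBench {

    // Times string concatenation. Negative iterations are warmup;
    // the clock starts at i == 0.
    static long timeStringConcat(int runs) {
        long start = 0;
        String s = "";
        for (int i = -10_000; i < runs; i++) {
            if (i == 0) start = System.nanoTime();
            s = "" + i; // work under test
        }
        if (s == null) throw new AssertionError(); // defeat dead-code elimination
        return (System.nanoTime() - start) / runs; // average ns per iteration
    }

    // Same pattern, but in its own method so the JIT optimises it separately.
    static long timeStringBuilder(int runs) {
        long start = 0;
        StringBuilder sb = new StringBuilder();
        for (int i = -10_000; i < runs; i++) {
            if (i == 0) start = System.nanoTime();
            sb.setLength(0);
            sb.append(i); // work under test
        }
        return (System.nanoTime() - start) / runs;
    }

    public static void main(String[] args) {
        int runs = 1_000_000; // pick so each loop runs for a few seconds
        System.out.printf("concat:  %,d ns%n", timeStringConcat(runs));
        System.out.printf("builder: %,d ns%n", timeStringBuilder(runs));
    }
}
```

If both loops lived in main, whichever ran second would be measured while the surrounding method was still poorly optimised for it.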
The proper way to do microbenchmarks is to learn about, and use correctly, the Java Microbenchmark Harness (JMH), which is included in OpenJDK via the JEP 230 Microbenchmark Suite from JDK 12 onward. A search for "java jmh" will yield links to some useful tutorials. I liked Jakob Jenkov's blog post, and of course anything by Aleksey Shipilëv, who is the principal developer and maintainer of JMH. Just pick the most current version of his JMH talks on the link provided.
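For reference, JMH is pulled in as a build dependency rather than being part of the JDK's standard classpath. A Maven setup typically looks something like the fragment below (the version number is an assumption; check Maven Central for the current release):

```
<!-- pom.xml fragment; version is illustrative, check Maven Central -->
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.37</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.37</version>
</dependency>
```

With that in place, you annotate benchmark methods with @Benchmark and let the harness handle warmup, forking, and statistics instead of hand-rolling timing loops.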
Java benchmarking is anything but trivial, and the less work your tested code does, the deeper the rabbit hole. Timestamping can be very misleading when trying to get a grip on performance issues. The one place where timestamping does work is when you try to measure wait time for external events (such as waiting for a reply to an HTTP request), as long as you can ensure that negligible time passes between the unblocking of a waiting thread and the taking of the "after" timestamp, and as long as the thread is unblocked promptly in the first place. This is typically the case if, and only if, the wait is at least on the order of tens of milliseconds; you're fine if you wait seconds for something. Still, warmup and cache effects will occur and can ruin the applicability of your measurements to real-world performance.
In terms of measuring "exact CPU time", one can take the approach detailed in Joachim Sauer's answer. When using JMH, one can measure CPU usage externally and then average it over the number of iterations measured; however, since this includes the harness's overhead, the approach is fine for comparative measurements but not suitable for deriving a statement like "my function xy, on average, takes such-and-such CPU seconds per iteration on the CPU architecture I used". On a modern CPU and JVM, such an observation is virtually impossible to make.
There are a number of profilers (JProfiler, JProbe, YourKit) available to analyze such data, and not only timing but much more (memory utilization, thread details, etc.).
You could look for your answer here:
How do I time a method's execution in Java?
There are many examples there of how to calculate a method's execution time.