I've got a simple benchmarking function using the C clock() function.
start[timerId] = clock();
clock_t end;
float dif_sec;
end = clock();
dif_sec = ((float)end - start[timerId]) / CLOCKS_PER_SEC;
printf("%s: %f seconds\n", msg, dif_sec);
It runs fine when compiled in 32-bit mode on Mac OS X, but when I compile in 64-bit mode, the results are all wrong. Why?!
Here's what I got from a pure C version of your code (MacOS X 10.5.8 - Leopard; MacBook Pro):
#include <time.h>
#include <stdio.h>
int main(void)
{
    clock_t start = clock();
    clock_t end = clock();
    /* subtract in clock_t first, then convert to float */
    float dif_sec = (float)(end - start) / CLOCKS_PER_SEC;
    printf("%s: %f seconds\n", "difference", dif_sec);
    /* clock_t may be wider than int in a 64-bit build, so cast for printf */
    printf("%s: %ld\n", "CLOCKS_PER_SEC", (long)CLOCKS_PER_SEC);
    return(0);
}
This is how I compiled it, and ran it, and the results I got:
Osiris JL: gcc -m32 -o xxx-32 xxx.c
Osiris JL: gcc -m64 -o xxx-64 xxx.c
Osiris JL: ./xxx-32
difference: 0.000006 seconds
CLOCKS_PER_SEC: 1000000
Osiris JL: ./xxx-64
difference: 0.000009 seconds
CLOCKS_PER_SEC: 1000000
Osiris JL: ./xxx-64
difference: 0.000003 seconds
CLOCKS_PER_SEC: 1000000
Osiris JL: ./xxx-32
difference: 0.000003 seconds
CLOCKS_PER_SEC: 1000000
Osiris JL:
Looks sort of OK - rather fast, perhaps, but that's plausible; it is a nice new machine, and it doesn't take long to make two clock() calls in a row.