I am reading a program which contains the following function:
int f(int n) {
    int c;
    for (c = 0; n != 0; ++c)
        n = n & (n - 1);
    return c;
}
I don't quite understand: what is this function intended to do?
It counts the number of 1s in the binary representation of n.
The function is INTENDED to return the number of set bits (1 bits) in the representation of n. What the other answers miss is that the function invokes undefined behaviour for arguments n < 0. This is because the function peels the number away one bit at a time, starting from the lowest bit and moving to the highest. For a negative number this means that the last value of n before the loop terminates (for 32-bit integers in two's complement) is 0x80000000. This number is INT_MIN, and it is now used in the loop for the last time:
n = n & (n - 1)
Unfortunately, INT_MIN - 1 overflows, and signed integer overflow invokes undefined behaviour. A conforming implementation is not required to "wrap around" integers; it may, for example, raise an overflow trap or produce all kinds of weird results instead.
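As a rough illustration (a minimal sketch; the name popcount_safe is mine, not from the original program), doing the same computation on an unsigned type avoids the problem, because unsigned wrap-around is well defined:

unsigned popcount_safe(unsigned int n) {
    unsigned c;
    for (c = 0; n != 0; ++c)
        n = n & (n - 1);   /* clears the lowest set bit; n - 1 wraps safely for unsigned */
    return c;
}

A caller holding a negative int can convert first, e.g. popcount_safe((unsigned int) x), which counts the set bits of its two's-complement representation.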
It is a (now obsolete) workaround for the lack of the POPCNT instruction on non-military CPUs.
This counts the number of iterations it takes to reduce n to 0 by using a bitwise AND.
The expression n = n & (n - 1) is a bitwise operation which clears the rightmost '1' bit of n (turns it into '0').
For example, take the integer 5 (0101). Then n & (n - 1) → (0101) & (0100) → 0100 (the rightmost '1' bit is removed).
So the above code returns the number of 1s in the binary representation of the given integer.
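As a small illustration (the name count_ones and the driver below are mine, added only to make the trace runnable), the loop for 5 takes exactly two iterations:

#include <stdio.h>

int count_ones(int n) {
    int c;
    for (c = 0; n != 0; ++c)
        n = n & (n - 1);   /* drops the rightmost set bit each pass */
    return c;
}

int main(void) {
    printf("%d\n", count_ones(5));   /* 0101 -> 0100 -> 0000, prints 2 */
    printf("%d\n", count_ones(12));  /* 1100 -> 1000 -> 0000, prints 2 */
    return 0;
}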
It shows how not to program this (for the x86 instruction set); using an intrinsic/inline assembly instruction is faster and easier to read for something as simple as this. (But this is only true for the x86 architecture as far as I know; I don't know about ARM or SPARC or anything else.)
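For example (a sketch assuming GCC or Clang; on MSVC the equivalent is __popcnt), the compiler builtin expresses the same thing in one call and, with a suitable -march or -mpopcnt setting, typically compiles down to a single POPCNT instruction:

#include <stdio.h>

int main(void) {
    unsigned int n = 0xF0u;
    /* __builtin_popcount returns the number of set bits in its argument */
    printf("%d\n", __builtin_popcount(n));   /* prints 4 */
    return 0;
}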
Could it be that it tries to return the number of significant bits in n? (I haven't thought it through completely...)