I was having a look over this page: http://www.devbistro.com/tech-interview-questions/Cplusplus.jsp, and didn't understand this question:
What’s potentially wrong with the following code?
    long value;
    //some stuff
    value &= 0xFFFF;
Note: Hint to the candidate about the base platform they’re developing for. If the person still doesn’t find anything wrong with the code, they are not experienced with C++.
Can someone elaborate on it?
Thanks!
Several answers here state that if an int has a width of 16 bits, 0xFFFF is negative. This is not true. 0xFFFF is never negative.
A hexadecimal literal is represented by the first of the following types that is large enough to contain it: int, unsigned int, long, and unsigned long.
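A quick way to see which of those types the literal actually gets on a given implementation is to inspect it with decltype (a minimal sketch; decltype, type_traits, and static_assert are C++11 features that postdate the original question, but they make the rule easy to observe):

    #include <cstdio>
    #include <type_traits>

    static_assert(0xFFFF == 65535, "the literal always denotes 65535, never -1");

    int main() {
        // Which type in the ordered list did 0xFFFF land in here?
        if (std::is_same<decltype(0xFFFF), int>::value)
            std::puts("0xFFFF has type int (int is wider than 16 bits here)");
        else if (std::is_same<decltype(0xFFFF), unsigned int>::value)
            std::puts("0xFFFF has type unsigned int (int is 16 bits wide here)");
    }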
If int has a width of 16 bits, then 0xFFFF is larger than the maximum value representable by an int. Thus, 0xFFFF is of type unsigned int, which is guaranteed to be large enough to represent 0xFFFF.
When the usual arithmetic conversions are performed for evaluation of the &, the unsigned int is converted to a long. The conversion of a 16-bit unsigned int to long is well-defined because every value representable by a 16-bit unsigned int is also representable by a 32-bit long.
There's no sign extension needed because the initial type is not signed, and the result of using 0xFFFF is the same as the result of using 0xFFFFL.
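That equivalence is easy to check on any conforming implementation, since long must be at least 32 bits (a minimal sketch; the sample value 0x12345678 is just an illustration):

    #include <cassert>

    int main() {
        long value = 0x12345678L;  // arbitrary test value; long is guaranteed to hold it
        long a = value & 0xFFFF;   // literal converted to long by the usual arithmetic conversions
        long b = value & 0xFFFFL;  // literal written explicitly as long
        assert(a == b && a == 0x5678L);
    }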
Alternatively, if int is wider than 16 bits, then 0xFFFF is of type int. It is a signed, but positive, number. In this case both operands are signed, and long has the greater conversion rank, so the int is again promoted to long by the usual arithmetic conversions.
As others have said, you should avoid performing bitwise operations on signed operands because the numeric result is dependent upon how signedness is represented.
Aside from that, there's nothing particularly wrong with this code. I would argue that it's a style concern that value is not initialized when it is declared, but that's probably a nit-pick level comment and depends upon the contents of the //some stuff section that was omitted.
It's probably also preferable to use a fixed-width integer type (like uint32_t) instead of long for greater portability, but really that too depends on the code you are writing and what your basic assumptions are.
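For instance, the same masking written against an unsigned fixed-width type sidesteps both the portability concern and the signed-operand concern mentioned above (a minimal sketch; the function name keep_low_16 is just for illustration):

    #include <cstdint>
    #include <cstdio>

    // Masking an unsigned fixed-width type behaves the same on every platform,
    // regardless of how wide long is or how signed values are represented.
    std::uint32_t keep_low_16(std::uint32_t v) {
        return v & 0xFFFFu;
    }

    int main() {
        std::printf("0x%X\n", static_cast<unsigned>(keep_low_16(0x12345678u)));  // prints 0x5678
    }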
I think that, depending on the size of a long, the 0xffff literal (-1) could be promoted to a larger size, and being a signed value it would be sign extended, potentially becoming 0xffffffff (still -1).
I'll assume it's because there's no predefined size for a long, other than that it must be at least as big as the preceding type (int). Thus, depending on the size, you might either truncate value to a subset of bits (if long is more than 32 bits) or overflow (if it's less than 32 bits).
Yeah, longs (per the spec, and thanks for the reminder in the comments) must be able to hold at least -2147483647 to 2147483647 (LONG_MIN and LONG_MAX).
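A few lines of code show how much this varies in practice (a minimal sketch):

    #include <climits>
    #include <cstdio>

    int main() {
        // long is only required to cover -2147483647..2147483647; its actual
        // width differs by platform (commonly 32 bits on Windows, 64 bits on
        // 64-bit Linux and macOS).
        std::printf("long is %zu bits, LONG_MIN=%ld, LONG_MAX=%ld\n",
                    sizeof(long) * CHAR_BIT, LONG_MIN, LONG_MAX);
    }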
For one, value isn't initialized before the AND is performed, so I think the behaviour is undefined; value could be anything.
The size of long is platform/compiler specific.
What you can say here is:
- It is signed.
- We can't know the result of value &= 0xFFFF; since, depending on the width of long, it could for example behave like value &= 0x0000FFFF; and not do what was expected.
While one could argue that since it's not a buffer overflow or some other error that's likely to be exploitable, it's a style thing and not a bug, I'm 99% confident that the answer the question-writer is looking for is that value is operated on before it's assigned to. The value is going to be arbitrary garbage, and that's unlikely to be what was meant, so it's "potentially wrong".
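In other words, the fix the interviewer is probably after is simply making sure value holds something definite before it is masked (a minimal sketch; the sample value stands in for whatever //some stuff was supposed to compute):

    #include <cstdio>

    int main() {
        long value = 0x0BADC0DEL;  // stand-in for whatever "some stuff" assigns;
                                   // without an assignment, the next line would
                                   // read an indeterminate value
        value &= 0xFFFF;           // keeps only the low 16 bits
        std::printf("0x%lX\n", value);  // prints 0xC0DE
    }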
Using MSVC, I think the statement would do what was most likely intended - that is, clear all but the least significant 16 bits of value - but I have encountered other platforms that would interpret the literal 0xffff as equivalent to (short)-1 and then sign extend it when converting to long, in which case the statement value &= 0xFFFF would have no effect. value &= 0x0FFFF is more explicit and robust.