I have a string:
"the quic"
which is passed to a function that packs the bytes of the string into an unsigned __int64, resulting in the following output:
0111010001101000011001010010000001110001011101010110100101100011
However, when I pass a string containing these values:
0xDE, 0x10, 0x9C, 0x58, 0xE8, 0xA4, 0xA6, 0x30, '\0'
The output isn't what I expected:
1111111111111111111111111111111111111111111111111010011000110000
I'm using the same code as for the first string, which reads:
(((unsigned __int64)Message[0]) << 56) | (((unsigned __int64)Message[1]) << 48) |
(((unsigned __int64)Message[2]) << 40) | (((unsigned __int64)Message[3]) << 32) |
(((unsigned __int64)Message[4]) << 24) | (((unsigned __int64)Message[5]) << 16) |
(((unsigned __int64)Message[6]) << 8)  | (((unsigned __int64)Message[7]));
I guess you did something like this:
char a[] = {0xDE, 0x10, 0x9C, 0x58, 0xE8, 0xA4, 0xA6, 0x30};
Changing it to unsigned char will solve your problem:
unsigned char a[] = {0xDE, 0x10, 0x9C, 0x58, 0xE8, 0xA4, 0xA6, 0x30};
I've tried both; the char version gives the wrong result in VC++, but the unsigned version is correct.
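For what it's worth, here is a minimal, self-contained sketch (not the original code; pack is an illustrative name) that runs the same shift-and-OR packing over both array types. unsigned long long is used so it also compiles outside VC++, where it has the same width as unsigned __int64:

#include <cstdio>

// Same shift-and-OR packing as in the question, templated on the element
// type so both array variants can be compared.
template <typename Byte>
unsigned long long pack(const Byte* m) {
    unsigned long long r = 0;
    for (int i = 0; i < 8; ++i)
        r |= (unsigned long long)m[i] << (56 - 8 * i);
    return r;
}

int main() {
    // '\xDE' literals keep the signed-char initialization legal in C++11.
    char          a[] = { '\xDE', '\x10', '\x9C', '\x58', '\xE8', '\xA4', '\xA6', '\x30' };
    unsigned char b[] = { 0xDE, 0x10, 0x9C, 0x58, 0xE8, 0xA4, 0xA6, 0x30 };

    printf("%016llx\n", pack(a));  // ffffffffffffa630 where plain char is signed
    printf("%016llx\n", pack(b));  // de109c58e8a4a630
    return 0;
}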
If you want to know the reason, look at a simpler version:
char a = 0xDE;
unsigned char b = 0xDE;
What's the difference? 0xDE has type int. In the first case you are converting an int to a (signed) char; in the second case you are converting an int to an unsigned char.
From the standard, 4.7/2 and 4.7/3:
If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). —end note ]
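To see the difference in concrete values, here is a small sketch (the -34 assumes char is a signed 8-bit type, as it is with VC++ on x86):

#include <cstdio>

int main() {
    char          a = (char)0xDE;   // int -> signed char: implementation-defined, typically -34
    unsigned char b = 0xDE;         // int -> unsigned char: always 222 (0xDE modulo 2^8)

    printf("%d %d\n", a, b);                  // typically prints: -34 222
    printf("%llx\n", (unsigned long long)a);  // ffffffffffffffde when char is signed
    printf("%llx\n", (unsigned long long)b);  // de
    return 0;
}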
The problem is the sign extension from char to __int64. If char is signed (which it may or may not be, depending on the platform, but which it is in MSVC) then anything from 0x80 to 0xFF is a negative integer when stored as a char. When it is converted to an __int64 (or even an unsigned __int64) it is converted as a negative value, which will have all the higher bits set.
signed char c=0x80; // plain "char" may be signed on your platform
assert(c==-128);
__int64 i=c;
assert(i==-128);
unsigned __int64 ui=c;
assert(ui==0xffffffffffffff80); // bit pattern for -128 extended to 64 bits
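Based on that, one possible fix (a sketch, not the poster's actual code; pack and msg are made-up names, and unsigned long long stands in for unsigned __int64) is to keep the plain char buffer but cast each byte to unsigned char, or mask it with 0xFF, before widening:

#include <cstdio>

// Casting each byte to unsigned char first prevents the sign extension,
// even when Message is a plain (possibly signed) char buffer.
unsigned long long pack(const char* Message) {
    unsigned long long r = 0;
    for (int i = 0; i < 8; ++i)
        r |= (unsigned long long)(unsigned char)Message[i] << (56 - 8 * i);
    return r;
}

int main() {
    const char msg[] = "\xDE\x10\x9C\x58\xE8\xA4\xA6\x30";
    printf("%016llx\n", pack(msg));   // de109c58e8a4a630
    return 0;
}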
__int64 is not a standard keyword; it is a Microsoft extension for a 64-bit integer type. The sign extension comes from the char itself: when a signed char holding a negative value is converted to the 64-bit type, the value is sign-extended, and the high bits end up set even though the destination is unsigned.
For portability, try "unsigned long long" instead.