So I'm just tinkering around with C and wanted to see if I could assign a binary value to an integer and use printf() to output either a signed or an unsigned value. But I get the same output regardless; I thought I'd get half the value when printing it as signed compared to unsigned. I'm using Code::Blocks and GCC.
Does printf() ignore the %i & %u and use the variable definition?
Sample Code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    signed int iNumber = 0b1111111111111111;
    printf("Signed Int : %i\n", iNumber);
    printf("Unsigned Int : %u\n", iNumber);
    return 0;
}
Same result if I change the int to unsigned:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    unsigned int iNumber = 0b1111111111111111;
    printf("Signed Int : %i\n", iNumber);
    printf("Unsigned Int : %u\n", iNumber);
    return 0;
}
I assume charCount should be iNumber. Both programs have undefined behavior, since you're using the wrong conversion specifier once.
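A sketch of a corrected version (hex stands in for the non-standard binary literal, and the variable names here are just illustrative), where each conversion specifier matches the type actually passed:

#include <stdio.h>

int main(void)
{
    signed int iSigned = 0xFFFF;      /* 65535, same bits as 0b1111111111111111 */
    unsigned int iUnsigned = 0xFFFFu;

    printf("Signed Int : %i\n", iSigned);             /* %i matches signed int   */
    printf("Unsigned Int : %u\n", iUnsigned);         /* %u matches unsigned int */
    printf("As unsigned : %u\n", (unsigned)iSigned);  /* cast when types differ  */
    return 0;
}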
In practice (for most implementations), printf relies on you to tell it what to pop off the stack; this is necessary because it's a variable-argument function. va_arg takes the type to pop as its second parameter.
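For illustration only (this is not printf's real implementation), a tiny variadic function shows how a spec string, rather than the arguments themselves, decides which type va_arg pulls:

#include <stdarg.h>
#include <stdio.h>

/* Reads one argument per character of spec: 'i' as int, 'u' as unsigned int. */
static void print_per_spec(const char *spec, ...)
{
    va_list ap;
    va_start(ap, spec);
    for (; *spec != '\0'; ++spec) {
        if (*spec == 'i')
            printf("%d\n", va_arg(ap, int));
        else if (*spec == 'u')
            printf("%u\n", va_arg(ap, unsigned int));
    }
    va_end(ap);
}

int main(void)
{
    print_per_spec("iu", -1, 42u);  /* the spec, not the values, picks the types */
    return 0;
}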
The bits are the same between the two programs, before and after assignment. So printf is simply interpreting the same bits as two different types.
The reason you get the same result for %i and %u is that the leftmost bit is unset, so the value is interpreted as positive under both specifiers.
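To watch the two specifiers diverge, set that leftmost bit. A minimal sketch, assuming a 32-bit two's-complement int:

#include <stdio.h>

int main(void)
{
    int n = -1;  /* every bit set in two's complement */
    printf("%i\n", n);             /* prints -1 */
    printf("%u\n", (unsigned)n);   /* prints 4294967295 with a 32-bit int */
    return 0;
}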
Finally, you should note that binary literals (0b) are a GCC extension, not standard C (they were only standardized much later, in C23).
Because for positive numbers, the binary representation on most platforms is the same between signed and unsigned.
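A quick way to check this is to compare the object representations directly; a minimal sketch using memcmp:

#include <stdio.h>
#include <string.h>

int main(void)
{
    signed int s = 65535;
    unsigned int u = 65535u;

    /* Compare the stored bytes: 0 means the bit patterns are identical. */
    printf("%s\n", memcmp(&s, &u, sizeof s) == 0 ? "same bits" : "different bits");
    return 0;
}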
First, you seem to be printing something called charCount, without error, and that 0b way of specifying a number isn't standard C. I'd check to be sure it's doing what you think it is. For a standard way of specifying bit patterns like that, use the octal (number begins with a zero) or hex (number begins with a zero and x) formats.
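For example, the sixteen-ones pattern from the question could be written portably either way (the variable names here are just for illustration):

unsigned int fromHex = 0xFFFF;     /* hexadecimal: begins with 0x */
unsigned int fromOctal = 0177777;  /* octal: begins with 0        */
/* Both are 65535, the same bit pattern as GCC's 0b1111111111111111. */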
Second, almost all computers use the same binary representation for a positive integer and its unsigned equivalent, so there will be no difference there. There will be a difference if the number is negative, and that typically depends on the most significant bit. Your ints could be any size from 16 bits on up, although on a desktop, laptop, or server it's very probably 32, and almost certainly either 32 or 64.
Third, printf() knows nothing about the types of the data you pass to it. When called, it cannot even know the sizes of its arguments, or how many there are. It derives all of that from the format specifiers, and if those don't agree with the arguments actually passed, there can be problems. It's probably the worst thing about printf().
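A short sketch of that point: the format string is printf's only source of type information, so it must agree with the argument (the mismatched line is left commented out, since executing it is undefined behavior):

#include <stdio.h>

int main(void)
{
    long big = 100000L;
    printf("%ld\n", big);   /* correct: %ld tells printf to read a long      */
    /* printf("%d\n", big);    undefined behavior: printf would read the     */
    /* wrong size, because the format is its only source of type information */
    return 0;
}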
The result will differ only for values higher than 0x7FFFFFFF, since on almost all systems it is the bit with the highest weight that marks the sign. For faster arithmetic inside the CPU, negative values are stored in two's complement, so 0xFFFFFFFF is -1 and 0x80000000 is -MAXINT-1, while 0x7FFFFFFF is MAXINT.
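Expressed in code (a sketch assuming the usual 32-bit two's-complement int; the first conversion is implementation-defined but yields -1 on such systems):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("%d\n", (int)0xFFFFFFFFu);  /* -1 on two's-complement machines     */
    printf("%d\n", INT_MIN);           /* -2147483648: the 0x80000000 pattern */
    printf("%d\n", INT_MAX);           /*  2147483647: the 0x7FFFFFFF pattern */
    return 0;
}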
You probably have 32-bit ints. In that case, your iNumber has a value of 65535 (sixteen one-bits) regardless of whether it is signed or unsigned.