
Why does the code output -1 in place of 1?

#include <stdio.h>

int main ()
{
 struct bit{
  char f1:1;
  char f2:1;
 };
 struct bit b;
 b.f1 = 0x1;
 b.f2 = 0x1;
 printf("%d\n",b.f1);
 return 0;
}

Compiled with gcc, the code outputs -1. Should it not be 1? Is it because I am compiling on a little-endian machine?

Added: While debugging with GDB I see that the value of the struct members right after initialization is -1, i.e. it is -1 before printing. The following is the printout from GDB:

(gdb) p b
$7 = {f1 = -1 '', f2 = -1 ''}

Let me know if you need any more debug commands. Please provide the commands for doing so.
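
For reference, a few standard GDB commands can show more detail here: ptype shows the declared types and bit widths, and print with a format letter shows the value in a chosen base.

(gdb) ptype b
(gdb) p/t b.f1
(gdb) p/d b.f1

p/t prints the field in binary and p/d prints it as a signed decimal.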


char can be a signed or an unsigned type; it is up to the compiler to decide. In your case it is apparently signed, so when you print your bit field the compiler sign-extends it to the int it is converted to. In two's complement representation one should not forget that -1 is represented with every bit set: 11111111 11111111 11111111 11111111 is -1 in a 32-bit int. When you have only 1 bit you can represent only 2 values in two's complement: the bit patterns 0 and 1, which stand for the values 0 and -1.

EDIT: Here is the actual section of the C standard, §6.2.5, paragraph 15: The three types char, signed char, and unsigned char are collectively called the character types. The implementation shall define char to have the same range, representation, and behavior as either signed char or unsigned char.35

35) CHAR_MIN, defined in limits.h, will have one of the values 0 or SCHAR_MIN, and this can be used to distinguish the two options. Irrespective of the choice made, char is a separate type from the other two and is not compatible with either.
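
To make the sign extension visible, here is a small sketch using the same one-bit declaration as in the question; the output assumes gcc on a machine where plain char is signed and int is 32 bits, and the %x line only serves to show the "every bit set" pattern:

#include <stdio.h>

int main ()
{
 struct bit{ char f1:1; } b;   /* same one-bit signed field as in the question */
 b.f1 = 0x1;
 /* when the field is read it is converted to int and sign-extended,
    so every bit of that int ends up set */
 printf("%d\n", b.f1);             /* prints -1 */
 printf("%x\n", (unsigned)b.f1);   /* prints ffffffff */
 return 0;
}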


Your bit fields are one bit wide and signed. The topmost bit usually denotes the sign of a value, so writing a 1 into a one-bit-wide signed field sets the sign bit, and reading the value back gives you -1.


To add to the correct answers of Skizz and tristopia: in C99, the current C, bit fields should be of type signed int, unsigned int, or bool (a.k.a. _Bool). Other types may be accepted by a given platform but are not necessarily portable. What is even worse is that if you declare them as plain int, the result may be signed or unsigned. So better stick to bool if you just need a flag, and to unsigned when you need more than one bit.
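
For what it is worth, a minimal sketch of the portable spellings (the struct and field names below are made up for illustration):

#include <stdbool.h>
#include <stdio.h>

struct flags {
 bool ready:1;          /* a single true/false flag */
 unsigned int mode:3;   /* a small unsigned value, range 0..7 */
};

int main ()
{
 struct flags f = { true, 5 };
 printf("%d %u\n", f.ready, (unsigned)f.mode);   /* prints: 1 5 */
 return 0;
}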


I believe KevinDTimm is right: bit fields are usually declared unsigned, as in unsigned name:x.

This is what I think is happening: the processor determines the sign from the topmost bit of a value, and gcc's char is signed by default (there is also an unsigned char). So your code is effectively:

struct bit
{
   signed char f1:1;
   signed char f2:1;
};

A char is one byte (8 bits), with the topmost bit indicating the sign (1 = negative, 0 = positive). But since your fields are only one bit wide, that single bit is the sign bit, and you set it, so the value reads back as negative.

Hope this helps. If it's wrong please comment so I can fix it.


The above explanations are great. To make your program work, one possible solution is to make the bit fields of type unsigned int.
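
For example, changing the declaration in the posted code to:

struct bit{
 unsigned int f1:1;
 unsigned int f2:1;
};

makes the single bit hold the values 0 and 1 instead of 0 and -1, so printf("%d\n",b.f1) then prints 1.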


Bit fields are typically supposed to be 'unsigned xxxxx'.

[edit]
The reason for that is just what you encountered: somebody uses a bit field in an 'unusual' way and gets results they should not get.

The point of a bit field is to reflect a bit. What values can a bit hold? 0 and 1 (I'm only speaking of normal bits, I can't grok the quantum stuff). Yet you have found a way to get -1, 0, and 1 out of that very same field. Somewhere it has got to break. I believe that a lot of bit field confusion results from negative bit fields, and so the unsigned modifier eases that confusion.

When you define your bit field as an int, you can have negative values. That is the reason for your results above.
Also, please see here for a more detailed treatment of this topic. Note that this has been rehashed innumerable times on SO, so a search for 'bitfield unsigned' would prove quite instructive.
[/edit]

To 'R.', re: "signedness ... is implementation-defined, but it can't vary from one element to another."

#include <stdio.h>
int main ()
{
   struct bit{
      char f1:1;
      unsigned char f2:1;
   };
  struct bit b;
  b.f1 = 1;
  b.f2 = 1;
  printf("%d\n",b.f1);
  printf("%d\n",b.f2);
  return 0;
}

produces, as output:

-1
1


I don't know C very well. For example, I don't know what the :1 does in char f1:1, but I was able to get this code to work on my PC by removing that:

#include <stdio.h>

int main ()
{
 struct bit{
  char f1;
  char f2;
 };

 struct bit b;

 b.f1 = 0x1;
 b.f2 = 0x1;
 printf("%d\n",b.f1);
 return 0;
}

Output below:

chooper@brooklyn:~/test$ gcc -o foo foo.c
chooper@brooklyn:~/test$ ./foo
1

I hope this helps you!
