Why is sizeof(bool) not defined to be 1 by the Standard itself?

The size of char, signed char and unsigned char is defined to be 1 byte by the C++ Standard itself. I'm wondering why it didn't define sizeof(bool) as well?

The C++03 Standard, §5.3.3/1, says:

sizeof(char), sizeof(signed char) and sizeof(unsigned char) are 1; the result of sizeof applied to any other fundamental type (3.9.1) is implementation-defined. [Note: in particular, sizeof(bool) and sizeof(wchar_t) are implementation-defined.]

I understand the rationale that sizeof(bool) cannot be less than one byte. But is there any rationale why it should be greater than one byte? I'm not saying that implementations define it to be greater than 1, but the Standard left it to be defined by the implementation, as if it may be greater than 1.

If there is no reason for sizeof(bool) to be greater than 1, then I don't understand why the Standard didn't define it as just 1 byte, as it did for sizeof(char) and all its variants.
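For reference, a minimal check you can run on your own implementation (assuming a C++11 or later compiler for static_assert): the assertions on the char types are guaranteed to hold, while the printed values are implementation-defined.

    #include <iostream>

    int main() {
        // Guaranteed by the Standard: the char types always have size 1.
        static_assert(sizeof(char) == 1, "sizeof(char) is 1 by definition");
        static_assert(sizeof(signed char) == 1, "sizeof(signed char) is 1 by definition");
        static_assert(sizeof(unsigned char) == 1, "sizeof(unsigned char) is 1 by definition");

        // Implementation-defined: commonly 1, but the Standard does not require it.
        std::cout << "sizeof(bool)    = " << sizeof(bool) << '\n';
        std::cout << "sizeof(wchar_t) = " << sizeof(wchar_t) << '\n';
    }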


The other likely size for it is that of int, being the "efficient" integer type for the platform.

On architectures where it makes any difference whether the implementation chooses 1 or sizeof(int), there could be a trade-off between size (but if you're happy to waste 7 bits per bool, why shouldn't you be happy to waste 31? Use bit-fields when size matters; see the sketch below) and performance (but when is storing and loading bool values going to be a genuine performance issue? Use int explicitly when speed matters). So implementation flexibility wins: if for some reason 1 would be atrocious in terms of performance or code size, the implementation can avoid it.
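A sketch of the size side of that trade-off (the struct names PlainFlags and PackedFlags are purely illustrative): bit-fields pack several flags into single bits of one underlying unit, whereas plain bool members each occupy sizeof(bool) bytes.

    #include <iostream>

    // One plain bool per flag: each member occupies sizeof(bool) bytes (often 1, sometimes more).
    struct PlainFlags {
        bool ready;
        bool dirty;
        bool visible;
        bool locked;
    };

    // Bit-fields pack the same four flags into single bits of one underlying unsigned int.
    struct PackedFlags {
        unsigned ready   : 1;
        unsigned dirty   : 1;
        unsigned visible : 1;
        unsigned locked  : 1;
    };

    int main() {
        std::cout << "sizeof(PlainFlags)  = " << sizeof(PlainFlags)  << '\n';
        std::cout << "sizeof(PackedFlags) = " << sizeof(PackedFlags) << '\n';
    }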


As @MSalters pointed out, some platforms work more efficiently with larger data items.

Many "RISC" CPUs (e.g., MIPS, PowerPC, early versions of the Alpha) have/had a considerably more difficult time working with data smaller than one word, so they do the same. IIRC, with at least some compilers on the Alpha a bool actually occupied 64 bits.

gcc for PowerPC Macs defaulted to using 4 bytes for a bool, but had a switch to change that to one byte if you wanted to.

Even for the x86, there's some advantage to using a 32-bit data item. gcc for the x86 has (or at least used to have; I haven't looked recently) a define in one of its configuration files, BOOL_TYPE_SIZE (going from memory, so I could have that name a little wrong), that you could set to 1 or 4 and then recompile the compiler to get a bool of that size.

Edit: As for the reason behind this, I'd say it's a simple reflection of a basic philosophy of C and C++: leave as much room as reasonable for the implementation to optimize and customize its behavior. Require specific behavior only when there's an obvious, tangible benefit and it's unlikely to be a major liability, especially if the requirement would make it substantially more difficult to support C++ on some particular platform (though, of course, if the platform is sufficiently obscure, it might get ignored).


Many platforms cannot effectively load values smaller than 32 bits. They have to load 32 bits, and use a shift-and-mask operation to extract 8 bits. You wouldn't want this for single bools, but it's OK for strings.
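To make that cost concrete, here is a hypothetical C++ rendering of the shift-and-mask extraction (the function name extract_byte and the little-endian lane order are illustrative; hardware without sub-word loads performs the equivalent as part of its load sequence).

    #include <cstdint>
    #include <iostream>

    // Illustrative only: what "load 32 bits, then shift and mask out 8 bits" looks like in C++.
    std::uint8_t extract_byte(std::uint32_t word, unsigned index) {
        // index 0..3 selects one 8-bit lane of the 32-bit word (little-endian lane order here).
        return static_cast<std::uint8_t>((word >> (index * 8)) & 0xFFu);
    }

    int main() {
        std::uint32_t word = 0xDDCCBBAAu;
        for (unsigned i = 0; i < 4; ++i)
            std::cout << "byte " << i << " = 0x" << std::hex
                      << static_cast<unsigned>(extract_byte(word, i)) << '\n';
    }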


The result of sizeof is measured in MADUs (minimal addressable units), not necessarily 8-bit bytes. On the Texas Instruments C54x/C55x processor families, 1 MADU is 2 octets (16 bits).

For these platforms sizeof(bool) = sizeof(char) = 1 MADU = 16 bits. This does not violate the C++ standard, but it clarifies the situation.
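A small sketch that makes the unit explicit: sizeof counts in C++ bytes (the size of char), and CHAR_BIT from <climits> reports how many bits such a byte holds on the current implementation.

    #include <climits>   // CHAR_BIT
    #include <iostream>

    int main() {
        // On typical hosts this prints 8 / 8 / 8 bits; on a DSP with 16-bit chars it would
        // print 16 / 16 / 16 bits, with sizeof(bool) and sizeof(char) still equal to 1.
        std::cout << "CHAR_BIT        = " << CHAR_BIT << '\n';
        std::cout << "bits in a char  = " << sizeof(char) * CHAR_BIT << '\n';
        std::cout << "bits in a bool  = " << sizeof(bool) * CHAR_BIT << '\n';
    }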
