
Integer size in C depends on what?


What does the size of an integer in C depend on?

Is the size of an int variable dependent on the machine or the compiler?


It's implementation-dependent. The C standard only requires that:

  • char has at least 8 bits
  • short has at least 16 bits
  • int has at least 16 bits
  • long has at least 32 bits
  • long long has at least 64 bits (added in C99)
  • sizeof(char) ≤ sizeof(short) ≤ sizeof(int) ≤ sizeof(long) ≤ sizeof(long long)
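
For example, here is a minimal sketch (assuming a C11 compiler, since it uses _Static_assert) that checks these guarantees through <limits.h>; on any conforming implementation it compiles silently:

    /* Compile-time checks of the standard's minimum guarantees.
       Assumes C11 for _Static_assert; a conforming compiler passes all of them. */
    #include <limits.h>

    _Static_assert(CHAR_BIT >= 8,                       "char has at least 8 bits");
    _Static_assert(sizeof(short) * CHAR_BIT >= 16,      "short has at least 16 bits");
    _Static_assert(sizeof(int) * CHAR_BIT >= 16,        "int has at least 16 bits");
    _Static_assert(sizeof(long) * CHAR_BIT >= 32,       "long has at least 32 bits");
    _Static_assert(sizeof(long long) * CHAR_BIT >= 64,  "long long has at least 64 bits");

    _Static_assert(sizeof(char) <= sizeof(short) &&
                   sizeof(short) <= sizeof(int) &&
                   sizeof(int) <= sizeof(long) &&
                   sizeof(long) <= sizeof(long long),
                   "sizes are non-decreasing");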

In the 16/32-bit days, the de facto standard was:

  • int was the "native" integer size
  • the other types were the minimum size allowed

However, 64-bit systems generally did not make int 64 bits, which would have created the awkward situation of having three 64-bit types and no 32-bit type. Some compilers expanded long to 64 bits.
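
A quick way to see which choice a given toolchain made is to print the sizes; this sketch assumes C99 for the %zu format specifier. Typical results are 4/4/8 with a pointer size of 4 on 32-bit targets (ILP32), 4/8/8 on 64-bit Linux/macOS (LP64), and 4/4/8 on 64-bit Windows (LLP64):

    /* Print the sizes the current compiler/target uses (assumes C99 for %zu). */
    #include <stdio.h>

    int main(void)
    {
        printf("int       : %zu bytes\n", sizeof(int));
        printf("long      : %zu bytes\n", sizeof(long));
        printf("long long : %zu bytes\n", sizeof(long long));
        printf("void *    : %zu bytes\n", sizeof(void *));
        return 0;
    }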


Formally, representations of all fundamental data types (including their sizes) are compiler-dependent and only compiler-dependent. The compiler (or, more properly, the implementation) can serve as an abstraction layer between the program and the machine, completely hiding the machine from the program or distorting it in any way it pleases.

But in practice compilers are designed to generate the most efficient code for a given machine and/or OS. To achieve that, the fundamental data types should have natural representations for that machine and/or OS. In that sense, these representations are indirectly dependent on the machine and/or OS.

In other words, from an abstract, formal, and pedantic point of view the compiler is free to completely ignore the data-type representations specific to the machine, but that makes no practical sense. In practice, compilers make full use of the data-type representations provided by the machine.

Still, if some data type is not supported by the machine, the compiler can provide that data type to programs by implementing its support at the compiler level ("emulating" it). For example, 64-bit integer types are normally available in 32-bit compilers for 32-bit machines, even though they are not directly supported by the machine. Back in the day, compilers would often provide compiler-level support for floating-point types on machines that had no floating-point co-processor (and therefore did not support floating-point types directly).
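
For instance, this sketch performs 64-bit arithmetic that compiles and runs even when targeting a 32-bit machine; the compiler simply lowers the operation to a sequence of 32-bit instructions or to calls into its runtime library (assumes C99 for long long and %llu):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long a = 0x1234567890ABCDEFULL;
        unsigned long long b = 1000003ULL;

        /* On a 32-bit target there is no single 64-bit multiply instruction;
           the compiler emulates it with several 32-bit multiplies and adds. */
        unsigned long long product = a * b;

        printf("product = %llu\n", product);
        return 0;
    }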


It depends primarily on the compiler. For example, on a 64-bit x86 processor you can use an old 16-bit compiler and get 16-bit ints, a 32-bit compiler and get 32-bit ints, or a 64-bit compiler, which on most platforms still gives you 32-bit ints (only a few ILP64 compilers ever made int 64 bits).

It depends on the processor to the degree that the compiler targets a particular processor, and (for example) an ancient 16-bit processor simply won't run code that targets a shiny new 64-bit processor.

The C and C++ standards do guarantee some minimum sizes (indirectly, by specifying minimum supported ranges):

char: 8 bits
short: 16 bits
int: 16 bits
long: 32 bits
long long: 64 bits

They also guarantee that the sizes/ranges are non-decreasing in the following order: char, short, int, long, and long long¹.

¹ long long is specified in C99 and C++0x, but some compilers (e.g., gcc, Intel, Comeau) allow it in C++03 code as well. If you want, you can persuade most (if not all) of them to reject long long in C++03 code.
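
Since these guarantees are really range guarantees, <limits.h> is where you can see what a given implementation actually provides; a small sketch (assumes C99 for long long and LLONG_MAX):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("CHAR_BIT  = %d\n", CHAR_BIT);
        printf("INT_MAX   = %d\n", INT_MAX);
        printf("LONG_MAX  = %ld\n", LONG_MAX);
        printf("LLONG_MAX = %lld\n", LLONG_MAX);
        return 0;
    }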


As MAK said, it's implementation-dependent. That means it depends on the compiler. Typically a compiler targets a single machine, so you can also think of it as machine-dependent.


AFAIK, the size of data types is implementation-dependent. This means it is entirely up to the implementer (i.e., the person writing the compiler) to choose what it will be.

So, in short, it depends on the compiler. But it is usually simplest to map types onto the word size of the underlying machine, so the compiler typically picks the sizes that fit the target machine best.


It depends on the running environment, no matter what hardware you have. If you are using a 16-bit OS like DOS, int will be 2 bytes. On a 32-bit OS like Windows or Unix, it is 4 bytes, and so on. Even if you run a 32-bit OS on a 64-bit processor, the size will still be 4 bytes. I hope this helps.


It depends on both the architecture (machine, executable type) and the compiler. C and C++ only guarantee certain minimums. (I think those are char: 8 bits, int: 16 bits, long: 32 bits.)

C99 includes fixed-width types like uint32_t (when possible); see <stdint.h>.

Update: Addressed Conrad Meyer's concerns.
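
As a short sketch of how those <stdint.h> types are used in practice: the exact-width typedefs (uint32_t, int64_t, ...) are optional and exist only when the target has a matching type, while the least/fast variants are always present; <inttypes.h> provides the matching printf macros:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t      exact32 = 4000000000u;  /* exactly 32 bits, if available   */
        int_least16_t least16 = -12345;       /* at least 16 bits, always exists */
        int_fast32_t  fast32  = 123456789;    /* "fast" type of at least 32 bits */

        printf("exact32 = %" PRIu32 "\n", exact32);
        printf("least16 = %" PRIdLEAST16 "\n", least16);
        printf("fast32  = %" PRIdFAST32 "\n", fast32);
        return 0;
    }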


The size of an integer variable depends on the kind of compiler:

  • with a 16-bit compiler:

    int is 2 bytes
    char is 1 byte
    float is 4 bytes

  • with a 32-bit compiler:

    int grows to 4 bytes
    char stays 1 byte (sizeof(char) is 1 by definition)
    float typically stays 4 bytes


A 64-bit compiler usually keeps int at 4 bytes too; it is mainly long (on LP64 systems) and pointers that grow to 8 bytes.
