
Why use integers smaller than 32bit?

I always like to use the variable with the smallest size that will work just fine, but would this really gain me anything if I used short or byte integers instead of int, given that memory is 32-bit word addressable? Does the compiler do something to enhance memory usage?


For local variables, it probably doesn't make much sense, but by using smaller integers in structures where you have thousands or even millions of items, you can save a considerable amount of memory.
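For instance (a hypothetical sketch; the struct names are invented, and the sizes assume the default sequential layout), halving the field width halves the per-element footprint of a large array:

struct PointSmall { public short X; public short Y; }   // 4 bytes per element
struct PointLarge { public int X; public int Y; }        // 8 bytes per element

// With a million elements, new PointSmall[1_000_000] takes roughly 4 MB,
// while new PointLarge[1_000_000] takes roughly 8 MB.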


No, int was chosen to be the fastest integer type for modern 32/64-bit architectures; using shorter types (short, sbyte) will only cost you performance.

You can sometimes save on memory, but only when using large arrays or lists. And even then it usually doesn't pay.

Calculation with 8 bits:

sbyte b = 10, c = 20;
sbyte a = (sbyte) (b + c);   // b + c is computed as int, so a cast back to sbyte is required

The typecast is required and carries a runtime cost.


If it is a plain variable, nothing is gained by using a shorter width, and some performance may get lost. The compiler will automatically widen storage to a full processor word, so even if you only declare 16 bits, it likely takes 32 bits on the stack. In addition, the compiler may need to perform certain truncation operations in some cases (e.g. when the field is part of a struct); these can cause a slight overhead.

It really only matters for structs and arrays, i.e. if you have many values. For a struct, you may save some memory, at the expense of the overhead I mention above. Plus, you may be forced to use a smaller size if the struct needs to follow some external layout. For an array, memory savings can be relevant if the array is large.
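As a rough sketch of the external-layout case (the struct and field names are hypothetical; the size matches a packed sequential layout):

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct WavFormatFragment       // fragment of a fixed on-disk format
{
    public ushort Channels;    // the format defines this field as 16 bits
    public uint SampleRate;    // and this one as 32 bits
}

// Marshal.SizeOf<WavFormatFragment>() returns 6, matching the external layout exactly.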


Normally, stick with int etc.

In addition to the other answers, there are also cases where you intentionally only want to support the given data size, because it represents some key truth about the data. This may be key when talking to external systems (in particular interop, but also databases, file formats, etc.), and might be mixed with checked arithmetic to spot overflows as early as possible.
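For example (a small sketch; the variable name and the 16-bit constraint are assumptions), checked arithmetic turns a silent wrap-around into an immediate exception:

short messageLength = short.MaxValue;   // an external protocol that only allows 16-bit lengths
checked
{
    messageLength += 10;                // throws OverflowException instead of wrapping to a negative value
}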


To be honest, memory consumption is probably not the most compelling reason to use small ints (in this example). But there is a general principle at stake that says yes, you should use just the memory required for your data structures.

The principle is this: allocate only the width that your data requires and let the compiler find any overflow bugs that may occur; it's an additional debugging technique that is very effective. If you know that a value should never exceed a threshold, then only allocate up to that threshold.
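A minimal illustration of that principle (the names are invented): when the declared width matches the real range of the value, the compiler rejects out-of-range constants and checked conversions fail fast at runtime:

byte percentage = 150;                   // fine: 0..255 comfortably covers a percentage
// byte tooBig = 300;                    // compile-time error CS0031: the constant does not fit in a byte
int input = 300;
byte fromInput = checked((byte)input);   // throws OverflowException at runtime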
