I read somewhere that the int data type gives better performance (compared to long and short) regardless of the OS, because its size is adapted to the word size of the machine, whereas long and short occupy 4 and 2 bytes, which may or may not match the word size. Could anyone give a good explanation of this?
From the standard:
3.9.1, §2:
There are five signed integer types: "signed char", "short int", "int", "long int", and "long long int". In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment (44); the other signed integer types are provided to meet special needs.
So you can say char <= short <= int <= long <= long long in terms of storage.
But you cannot assume that a short is 2 bytes and a long is 4.
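If you want to see what is (and isn't) guaranteed, here is a minimal compile-time sketch, assuming a C++11 compiler for static_assert; note that the exact byte sizes it does not assert are precisely the ones the standard leaves open:

    #include <climits>

    // Only the ordering of the sizes and the minimum widths are guaranteed;
    // the exact byte counts vary by platform and ABI.
    static_assert(sizeof(char)  <= sizeof(short),     "ordering guaranteed by the standard");
    static_assert(sizeof(short) <= sizeof(int),       "ordering guaranteed by the standard");
    static_assert(sizeof(int)   <= sizeof(long),      "ordering guaranteed by the standard");
    static_assert(sizeof(long)  <= sizeof(long long), "ordering guaranteed by the standard");

    // Minimum widths: short and int at least 16 bits, long at least 32, long long at least 64.
    static_assert(CHAR_BIT * sizeof(short)     >= 16, "short is at least 16 bits");
    static_assert(CHAR_BIT * sizeof(long)      >= 32, "long is at least 32 bits");
    static_assert(CHAR_BIT * sizeof(long long) >= 64, "long long is at least 64 bits");

    int main() {}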
Now, to your question: most compilers make int match the register size of their target platform, which makes alignment easier and access faster on some platforms. But that does not mean you should prefer int.
Take the data type that matches your needs, and do not optimize without measuring performance first.
int is traditionally the most "natural" integral type for the machine on which the program is to run. What is meant by "most natural" is not entirely clear, but I would expect that it would not be slower than other types. More to the point, perhaps, is that there is an almost universal tradition of using int in preference to other types when there is no strong reason for doing otherwise. Using other integral types will cause an experienced C++ programmer, on reading the code, to ask why.
short only optimizes storage size; calculations are always extended to int, if applicable (i.e. unless short is already the same size as int) — see the sketch after this list
I'm not sure that int should always be preferred to long; the obvious exception is when int's capacity doesn't suffice
You already mention the native word size, so I'll leave that aside
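To make the first point concrete, here is a minimal sketch (assuming a C++11 compiler) showing that arithmetic on short operands is carried out in int:

    #include <type_traits>

    // Integer promotion: in "a + b" both short operands are promoted to int
    // before the addition, so the result has type int.
    short a = 1;
    short b = 2;
    static_assert(std::is_same<decltype(a + b), int>::value,
                  "arithmetic on short operands is carried out in int");

    int main() { return a + b; }  // the addition itself happens in int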
Eskimos reportedly use forty or more different words for snow. When you only want to communicate that it's snow, then the word "snow" suffices. Source code is not about instructing the compiler: it's about communicating between humans, even if the communication may only be between your current and somewhat later self…
Cheers & hth.
int does not give better performance than the other types. Really, on most modern platforms, all of the integer types will perform similarly, excepting long long. If you want the "fastest" integer available on your platform, C++ does not give you a way to do that.
On the other hand, if you're willing to use facilities defined by C99, you can use one of the "fast" integer types defined there (int_fast8_t, int_fast32_t, and so on, from <stdint.h>).
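For example, a minimal sketch using <cstdint> (C++11's wrapper around C99's <stdint.h>):

    #include <cstdint>   // <stdint.h> in C99
    #include <cstdio>

    int main() {
        // int_fast32_t is whatever type the implementation considers the
        // "fastest" signed integer type with at least 32 bits.
        std::int_fast32_t sum = 0;
        for (std::int_fast32_t i = 0; i < 1000; ++i)
            sum += i;

        std::printf("sizeof(int_fast32_t) = %zu, sum = %jd\n",
                    sizeof(sum), static_cast<std::intmax_t>(sum));
        return 0;
    }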
Also, on modern machines, memory hierarchy is more important than CPU calculations in most cases. Using smaller integer types lets you fit more integers into CPU cache, which will increase performance in pretty much all cases.
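As a rough illustration of the footprint argument (a sketch using the fixed-width types from <cstdint>, which exist on common platforms):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    int main() {
        // The same number of elements takes a quarter of the space with a
        // 16-bit element type compared to a 64-bit one, so four times as
        // many values fit into the same amount of cache.
        const std::size_t n = 1024;
        std::printf("1024 x int64_t: %zu bytes\n", n * sizeof(std::int64_t));  // 8192
        std::printf("1024 x int16_t: %zu bytes\n", n * sizeof(std::int16_t));  // 2048
        return 0;
    }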
Finally, I would recommend not using int as a default data type. Usually, I see people reach for int when they really want an unsigned integer instead. The conversion from signed to unsigned can lead to subtle integer overflow bugs, which can lead to security vulnerabilities.
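A minimal sketch of the kind of signed-to-unsigned surprise meant here (the variable names are just for illustration):

    #include <cstdio>

    int main() {
        int balance = -5;
        unsigned int limit = 10;

        // The usual arithmetic conversions turn balance into a huge unsigned
        // value (the bit pattern of -5), so the comparison is false.
        if (balance < limit)
            std::printf("never printed\n");
        else
            std::printf("surprising: -5 < 10u evaluates to false\n");
        return 0;
    }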
Don't choose a data type because of some intrinsic "speed"; choose the right data type to solve the problem you're trying to solve.