Is it better to use integers as loop counter variables?


I remember reading somewhere that it is better to use integers as loop counter variables rather than char or short. If yes, why? Does it provide any optimization benefits?


Generally, the compiler will make int a good size for fitting into one of your CPU's general-purpose registers. That generally leads to fast access.

Of course there's no guarantee of anything. The compiler is free to do a lot of things, including, I would guess, promoting some of your code that uses char to some larger type. So the difference might not even matter.

Really, to get the answer that's true for your compiler, you should look at the assembly it outputs.
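
For example, a minimal sketch for checking this yourself (the function names are mine); compile with something like gcc -O2 -S and compare the two listings:

long sum_with_char (const long *data)
{
  long total = 0;
  for (char i = 0; i < 100; ++i)  /* char counter */
    total += data[i];
  return total;
}

long sum_with_int (const long *data)
{
  long total = 0;
  for (int i = 0; i < 100; ++i)   /* int counter */
    total += data[i];
  return total;
}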


In a 32-bit architecture, operations on 4-byte (int) variables are usually faster. This is mainly due to register size and memory alignment. Note that in a 64-bit architecture int is not automatically widened to 64 bits: under the common LP64 and LLP64 data models it stays 32 bits, though it is still handled efficiently.
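
A quick way to check your own platform's sizes (under LP64, typical of 64-bit Linux and macOS, this prints 4, 8, 8):

#include <stdio.h>

int main (void)
{
  /* integer sizes are implementation-defined; see what yours are */
  printf ("sizeof(int)    = %zu\n", sizeof (int));
  printf ("sizeof(long)   = %zu\n", sizeof (long));
  printf ("sizeof(size_t) = %zu\n", sizeof (size_t));
  return 0;
}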


Alexey* is right: it's usually faster to use a type that's the same width as the architecture (i.e. a 32-bit int for a 32-bit system).

Also, if you use a char, e.g.

for (char i = 0; i < max; ++i)

there's a slight chance that you (or a colleague) will come back to the code in a month's time and change max to something high, causing an overflow and an annoying bug ;)
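
A sketch of how that bug bites, assuming char is signed and 8 bits wide (as on typical x86 platforms):

void buggy (void)
{
  int max = 200;                   /* raised past CHAR_MAX (127) */
  for (char i = 0; i < max; ++i)   /* i can never reach 200 */
  {
    /* incrementing past 127 wraps (implementation-defined) to
       -128 on typical machines, so this loop never terminates */
  }
}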

Sam

*and everyone else who answered while I was writing this!


The int type can generally be expected to be the fastest implemented integer type on your platform, and should therefore be the default choice.

From K&R, 2nd edition, p. 36:

int: an integer, typically reflecting the natural size of integers on the host machine.


It would be even better to use the size_t type for loop counters when indexing arrays. It scales to 64 bits on 64-bit platforms.
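
For example, when indexing an array (a small sketch; size_t is the type that sizeof and functions like strlen return):

#include <stddef.h>

void double_all (double *values, size_t count)
{
  /* size_t scales with the platform's address width, so it can
     index any object, on 32-bit and 64-bit systems alike */
  for (size_t i = 0; i < count; ++i)
    values[i] *= 2.0;
}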


Ever notice how the C standard is kind of iffy about what size integers are? This fact drives device driver engineers and those working on communications protocols into a tizzy as they believe the language should clearly define the size of objects.

C says that an int is the natural size for the implementation's architecture. That means it will be handled at least as efficiently as any other size. Take the x86 architecture: code using a short (16-bit integer) in a 32-bit program carries an extra "operand-size override" prefix on those instructions. So the code has more bytes, though usually no performance penalty. Unless the extra bytes cause a cache line overflow....

The x86 code generated for a char counter usually includes masking after each increment to ensure the value stays 8 bits. It might seem that using a smaller variable would produce smaller, tighter code, but that isn't the case on x86 and several other common CPUs.
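
A sketch to see both effects yourself (my example, not canonical compiler output; build at a low optimisation level and inspect the assembly):

int count_short (void)
{
  int hits = 0;
  /* 16-bit compares typically carry the 0x66 operand-size
     override prefix on x86 */
  for (short i = 0; i < 1000; ++i)
    ++hits;
  return hits;
}

int count_char (void)
{
  int hits = 0;
  /* the compiler often emits extra sign-extension/masking
     (e.g. movsx) to keep the counter 8 bits */
  for (char i = 0; i < 100; ++i)
    ++hits;
  return hits;
}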


Worry about getting your loops right before worrying about this stuff. You're much more likely to have an off-by-one error in your loop bounds than a measurable difference in speed or code size by changing the type of the loop counter between int, char or short. So worry about correctness first.

That said, default to using int or unsigned int unless you have a reason to do otherwise; I say this because you're less likely to need to worry about overflow or wraparound with the larger type, not because it might be faster (even though it might).


In some instances, overflow problems with chars (or even shorts) will produce non-terminating loops.
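
For example, with an 8-bit unsigned char the condition below is always true, so the loop never exits:

void never_ends (void)
{
  /* i <= 255 can never be false for an 8-bit unsigned char:
     incrementing 255 wraps back to 0 */
  for (unsigned char i = 0; i <= 255; ++i)
  {
    /* ... */
  }
}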


It really depends on the platform that you are writing the code for. The best type is one matched to your platform. I.e., if you are writing code for a simple 8-bit micro, perhaps using a uint8_t is better than using a uint16_t.
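
A sketch using the fixed-width types from <stdint.h> (C99), assuming the loop bound fits in 8 bits:

#include <stdint.h>

void clear_buffer (uint8_t *buffer, uint8_t length)
{
  /* an 8-bit counter avoids the multi-byte arithmetic a 16-bit
     int counter would need on an 8-bit micro */
  for (uint8_t i = 0; i < length; ++i)
    buffer[i] = 0;
}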


Usually, int is the right choice for looping. There are two reasons it might not be:

  1. It might be larger than necessary. SDCC supports some 8-bit processors such as the Z80, which allow 8-bit access to registers, although sizeof(int)=2. If you don't need more than 8 bits for your loop variable, then using chars, with sizeof(char)=1, allows the optimiser to cram more into register space, resulting in faster code.
  2. It might be too small. If ints are 16 bits, then you might well have loops that need to run more times than that (see the sketch below).

So, yes, you may have to think about how big ints are, depending on your architecture. But usually you don't.
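
As a sketch of the second point: where int is 16 bits (INT_MAX == 32767), an int counter cannot count to 100000, but long is guaranteed to be at least 32 bits:

void tally (void)
{
  /* long must be at least 32 bits, so this is safe even where
     int is only 16 bits */
  for (long i = 0; i < 100000L; ++i)
  {
    /* ... */
  }
}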


It really depends on what your loop is doing. Take this loop for example:

typedef struct _SomeData
{
  /* some data here */
} SomeData;

void DoSomethingWithData (SomeData *data);   /* defined elsewhere */

void SomeFunction (SomeData *array_of_data, int number_of_elements)
{
  int i;

  for (i = 0 ; i < number_of_elements ; ++i)
  {
     DoSomethingWithData (&array_of_data [i]);
  }
}

Some compilers will do a good job of optimising the above. But some compilers, especially compilers for niche embedded microprocessors, will generate horribly inefficient code. It can be re-written as:

void SomeFunction (SomeData *array_of_data, int number_of_elements)
{
  SomeData *current = array_of_data, *end = &array_of_data [number_of_elements];

  while (current < end)
  {
    DoSomethingWithData (current++);
  }
}

which doesn't use integers at all and is more likely to generate good output regardless of the compiler (modern compilers are likely to optimise the integer version to something like the above anyway).

So, integers are not always necessary for loops, and the best optimisation is always to not do something that is not necessary. If, however, the loop explicitly needs the integer, then the int type would generally provide the best result.

Of course, you must always use a profiler to be sure about the performance of code and whether or not changing the type makes any difference.
