I know that computers these days are rather fast, and that minor efficiency tweaks like the one I'm about to ask about don't really matter much, but I think it's still good to know.
int something;
something = 5;
or
int something = 5;
If the compiler compiles the two pieces of code differently, which of the two is more efficient? It will probably differ from compiler to compiler, but I'm mainly interested in gcc.
These days, when you turn on optimizations, you (pretty much) can't predict ANYTHING about the generated code. Believe it or not, your code describes the ends, not the means! So it doesn't make much sense to predict how it'll execute, especially after optimizations -- all that C guarantees is that it'll give you the result you asked for.
And before optimizations, it doesn't make sense to worry about it.
And your code is trivial for the compiler to optimize, so don't worry about it.
Start thinking about more important things in your program. :)
Even with optimization off, it would generate the same code. Declaring variables is for the benefit of the compiler; it doesn't directly generate code.
Actually, with experience you CAN predict what an optimizer will do, especially for something like this.
I recommend you try it yourself; it only takes about 30 seconds or so to see the results, start to finish.
The phrasing of your question doesn't quite make sense: the computers themselves do not perform efficiency tweaks (in reference to what happens to some C code); the programs that run on those computers (the compiler, for example) are what do or don't create the efficiency. Because computers keep getting faster and have more memory, and the human programmer is willing to sit and wait on the compiler, compilers can afford to spend more effort generating efficient programs.
You used the term efficiency and not optimization; I assume you are curious what an optimizer will do with that code. You also didn't specify what level of optimization you are interested in, so:
first case:
int something = 5;
second case:
int something;
...
something = 5;
You also didn't specify local variables or global; compilers can and will behave differently for the two.
When it is a local variable it is easier: you will get the same thing. With no optimization, the compiler will allocate some space on the stack and generate some code to create that constant and store it in that location on the stack. Wherever the variable is used, it will load from or store to that stack location (eventually). With a little optimization, it may still use the stack but will not be as eager to keep using that memory as the home base for the variable; it may use a register a lot and won't store and load back everywhere (unoptimized code treats variables much like volatile, by the way). With good optimization -- depending on the target instruction set and on what the compiler knows about the target (it may know that a load from RAM to a register is slower than an immediate, or it may be that using the immediate is worse or equal) -- in the simplest case (the variable gets the one assignment and is on the right-hand, operand side of the equals sign from there on out), the compiler won't necessarily allocate stack space or even use a register if the instruction set allows; it will encode the immediate wherever it can.
When used as a global variable:
first case:
int something = 5;
int main ( void )
{
second case:
int something;
int main ( void )
{
something = 5;
The first case is going to allocate space in the .data segment; in the second case the variable will be allocated from the .bss (zero-init) segment. That will be your first difference.
Depending on other variables, how the operating system works, the format of the executable, etc., there may be a different amount of work for the OS. For example, if you have many other variables and this is the only variable in the .data segment, then the first case will require extra work to read the .data segment from the binary file and place that data in RAM. Whereas if all of your variables were in the zero-init segment, zeroing one extra variable is minimal compared to all the code, time, and hardware needed to read that extra portion of the binary file.
And the opposite is true: if all of your variables are in the .data segment except for this one in .bss, it costs the operating system extra work to read the binary file to find out where and how much .bss memory to clear. If everything is in .data, then it is a read and copy from the file to RAM -- not nearly as efficient as having everything in .bss, but if this was the only variable in .bss, there is a noticeable performance hit compared to all variables being in .data.
With low/no optimization, the second case will generate code that uses an immediate and writes that constant to RAM, whereas the first case will not, since the value is already there from when the binary was loaded.
When you get into better optimization, the result depends heavily on how you use this variable: how much code is around it, how widely spaced the uses of the variable are, and whether, after the assignment, the variable is ever on the left side of an equals sign (gets the result of an operation versus always being an operand). The target comes into play, as do a number of other factors. You can still make pretty good predictions about what will happen if you know your compiler and target, but it won't always be the same for every possible program size or usage of that variable. Sometimes both cases will only encode immediates in the instructions and never use RAM or a register. Sometimes both cases will load the value into a register using an immediate and use that register throughout, even if the variable gets the result of an operation. Sometimes a register is used for a while, then the value is evicted to RAM because the compiler needs more registers to implement the next segment of code, and later the variable is loaded from RAM into the same or some other register to complete the encoding of the function.
Some compilers can optimize across functions and files in the project and know, for example, that you never use that variable as a result, so they can safely never store anything to the RAM allocated for it. Some cannot, and will use an immediate throughout the function but, before returning, will need to make one store to RAM for that variable, just in case some other function called after this one uses it.
Personally, I only ever use the second case. First, it is cleaner and more readable. Second, you get that lack-of-a-.data-segment efficiency in loading. But mostly it is because I write a lot of embedded code and write my own loaders and startup routines. If you make a rule of never initializing in the declaration, and you always write code to assign your variable a value before you use that value (it is a very good thing that compilers now warn you when you use a variable before assigning it, even though the assumption is that it is zero), then basically the loader never has a .data segment to deal with (which is especially nice if you run out of ROM), and the loader/startup never has to zero-init any memory. You can end up burning more binary/flash space than if you had a .data or .bss or both, and it can cost you some execution time -- though not always. What this trick/hack buys you is much cleaner code overall that is less risky, more reliable, and more portable, since you don't have to get into the nuances of different linkers (in order to get your .bss and .data info placed for the startup code or loader to use), especially if you run from flash/ROM.
Most folks do not use a lot of global variables. I also suspect most folks don't realize when they are not using the optimizer and are running several times slower than they could; hopefully you are using it. So if you are talking about local variables and you normally use some form of optimization, you will NOT see a difference between those two cases, and that is why most of the other folks gave you the "there is no difference" answer. If you do see a difference, it is because you are not using optimization due to something else you have done (compiled for debug, for example), or because the optimizer is not very good, or because the instruction set and compiler know that a load from memory is faster than or the same speed as using an immediate (this can depend heavily on the size of the immediate and the processor -- ARM, for example).
It really shouldn't make any difference - the compiler moves code around so much you might not even recognise your own program. That said, why not find out for yourself?
#include <time.h>
#include <stdio.h>
int main ()
{
clock_t start, end;
double runTime;
start = clock();
int i;
for (i=0; i<10000000; i++)
{
int something;
something = 5;
}
end = clock();
runTime = (end - start) / (double) CLOCKS_PER_SEC ;
printf ("Run time one is %g seconds\n", runTime);
start = clock();
for (i=0; i<10000000; i++)
{
int something = 5;
}
end = clock();
runTime = (end - start) / (double) CLOCKS_PER_SEC ;
printf ("Run time two is %g seconds\n", runTime);
getchar();
return 0;
}
During the process of compilation:
> Both will contribute the same number of tokens and mean the same thing.
> There would be two statements to compile and optimize.
> Since compilation flows through a state machine, it would be difficult even to notice the difference.
During the process of execution:
> Since both statements mean the same thing, they become equivalent during the compiler's optimization stage.
> So they will compile to exactly the same assembly code.
> In effect, there won't be any difference at all after the object file is generated.
So, these things never make a difference.
As far as I know, during compiler optimization the compiler doesn't reserve space for a declared variable that is never defined, so I think
int something = 5;
is better than the first, because it reduces the memory use and the compiler's effort when retrieving the variable "something" from memory.