Computing Efficiency Question

I'm wondering about computational efficiency. I'm going to use Java in this example, but it is a general computing question. Let's say I have a string and I want to get the value of the first letter of the string, as a string. So I can do

String firstletter = String.valueOf(somestring.toCharArray()[0]);

Or I could do:

char[] stringaschar = somestring.toCharArray();
char firstchar = stringaschar[0];
String firstletter = String.valueOf(firstchar);

My question is, are the two ways essentially the same, computationally? I mean, in the second way I explicitly had to create two intermediate variables, to be stored in memory (the stack?) temporarily. But with the first way, too, the computer still has to create the same values implicitly, right? And the number of operations doesn't change. My thinking is that the two ways are the same. But I'd like to know for sure.


In most cases the two ways should produce the same, or nearly the same, compiled code. An optimizing compiler (in Java's case, mainly the JIT compiler at run time) will usually detect that the intermediate variables in the second option are not needed to produce the result and will eliminate them.
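
A quick way to check this claim for yourself is to put both variants into a small class (the class and method names below are just for illustration), compile it with javac, and compare the bytecode with javap -c:

// FirstLetter.java -- hypothetical example; compile with "javac FirstLetter.java",
// then disassemble with "javap -c FirstLetter" to compare the two methods.
public class FirstLetter {

    // Variant 1: everything in one expression.
    static String inline(String s) {
        return String.valueOf(s.toCharArray()[0]);
    }

    // Variant 2: explicit intermediate variables.
    static String withLocals(String s) {
        char[] chars = s.toCharArray();   // same call as in variant 1
        char first = chars[0];            // same array access
        return String.valueOf(first);     // same conversion
    }
}

The bytecode typically differs only by a few extra load/store instructions for the named locals; once the JIT compiles the methods to machine code, the two variants end up essentially identical.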


This all depends on how your Java compiler and runtime decide to translate your code into bytecode and then into machine code for execution. They may well apply optimizations that make the two approaches come out as exactly the same code in the end.


The two should be essentially the same. In both cases you make the same calls: converting the string to an array, reading the first character, and getting the string value of that character. There may be minor differences in how the compiler handles these, but they should be insignificant.
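
As an aside, note that both variants copy the entire string into a new char array just to read one element. A sketch of an alternative that skips that copy (assuming somestring is known to be non-empty, otherwise charAt(0) throws) would be:

// Read the first character directly, without copying the string into an array.
String firstletter = String.valueOf(somestring.charAt(0));

// Or, equivalently, take a one-character substring.
String firstletter2 = somestring.substring(0, 1);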


The earlier answers agree with each other and are right, AFAIK.

However, I think there are a few additional, general considerations you should be aware of any time you wonder about the efficiency of any computational asset (code, for example).

First, if everything were under your strict control, you could in principle count clock cycles one by one from the assembly code, or derive the computational cost of an operation or algorithm by more abstract reasoning.

So far so good. But don't forget to measure afterwards. You may find that measuring execution times is not so easy and straightforward, and is sometimes elusive (how do you account for interrupts, for I/O waits, for network bottlenecks...?). But it pays. You ask here for advice, but YOUR compiler/interpreter/bytecode generator/whatever could be configured with just THAT one switch buried in the third layer of your config scripts.
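
For instance, a very rough measurement sketch in Java (the loop counts are arbitrary; a proper harness such as JMH handles warm-up and dead-code elimination far better) might look like this:

// Rough micro-benchmark sketch. Results are only indicative: the JIT needs
// warm-up, and unused results may be optimized away entirely.
public class Measure {
    public static void main(String[] args) {
        String s = "hello";
        String sink = "";                        // keep results "used" so the work is not eliminated

        // Warm-up: let the JIT compile the code paths before timing them.
        for (int i = 0; i < 1_000_000; i++) {
            sink = String.valueOf(s.toCharArray()[0]);
        }

        long t0 = System.nanoTime();
        for (int i = 0; i < 10_000_000; i++) {
            sink = String.valueOf(s.toCharArray()[0]);           // variant 1: one expression
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < 10_000_000; i++) {
            char[] chars = s.toCharArray();                      // variant 2: explicit locals
            char first = chars[0];
            sink = String.valueOf(first);
        }
        long t2 = System.nanoTime();

        System.out.println("variant 1: " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("variant 2: " + (t2 - t1) / 1_000_000 + " ms");
        System.out.println(sink);                // print the sink so it cannot be discarded
    }
}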

The other consideration, more relevant to your current point, is the existence of Black Boxes. You are not alone in the world, and a Black Box is any piece used to run your code that is essentially out of your control. Compilers, operating systems, networks, storage systems, and the world in general fall into this category.

What we do with Black Boxes (they are black either because their code is not public, or because we just happen to spend our free time fishing instead of digging through library source code) is establish mental models to help us understand how they work. But you should always be aware that they are models, not the real thing. Models help us explain things... to a certain extent. Classical Mechanics reigned until Relativity and Quantum Mechanics flourished. None of them is wrong; they have limits, and so do all our models.

Even if you happen to be friends with your router's OS or your Linux kernel, when confronting an efficiency problem, design a good experiment and measure.

HTH!

NB: By "design a good experiment" I mean beware of the tar pits. Examples: measuring your measurement code instead of the target of the experiment, being influenced by external factors, forgetting external factors that will influence the production code, testing with data whose cardinality, orthogonality, or whatever-ality is dissimilar to the "real world", mapping the production and testing client/server workhorses incorrectly, etc., etc., etc.

So go and measure your code. Your results will be the most interesting thing on this page.
