I did some searching on here and haven't found anything quite like this, so I'm going to go ahead and ask. This is really more about semantics than an actual programming question. I'm currently writing something in C++ but the language doesn't really matter.
I'm well aware that it's good programming practice to keep your functions/methods as short as possible. Yet how do you really know if a function is too long? Alternatively, is it ever possible to break functions down too much?
The first programming language I learned (other than Applesoft BASIC, which doesn't count...) was 6502 assembly language, where speed and optimization are everything. In cases where a few extra cycles screw up the timing of your entire program, it's often better to set a memory location or register directly rather than jump to another subroutine. The former operation might take 3 or 4 cycles, while the latter might take two or three times that altogether.
While I realize that nowadays if I were to even mention cycle counts to some programmers they'd just give me a blank look, it's a hard habit to break.
Specifically, let's say (again using C++) we have a private class method that's something like the following:
int Foo::do_stuff(int x) {
    this->x = x;
    // various other operations on x
    this->y = this->x;
    return this->y;
}
I've seen some arguments that, at the very least, each set of operations should be its own function. For instance, do_stuff() should in theory be renamed set_x(int x), a separate function should be written for the set of operations performed on class member x, and a third function should be written to assign the final value of class member x to class member y. But I've seen other arguments that EVERY operation should have its own function.
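For illustration, the fully split-out version of that example might look something like this (the helper names here are placeholders I'm making up, not names anyone has prescribed):

void Foo::set_x(int x) {
    this->x = x;
}

void Foo::transform_x() {
    // the various other operations on x would go here
}

void Foo::set_y() {
    this->y = this->x;
}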
To me, this just seems wrong. Again, I'm looking at things from an internal perspective; every method call is pushing an address on the stack, performing its operations, then returning from the subroutine. This just seems like a lot of overhead for something relatively simple.
Is there a best practice for this sort of thing or is it more up to individual judgment?
Since the days of 6502 assembly, two things have happened: Computers have got much faster, and compilers (where appropriate) have become smarter.
Now the advice is to stop spending all your time fretting about individual cycles until you are sure that they are a problem. You can spend that time more wisely. If you mention cycle counts to me, I won't give you a blank look because I don't know what they are; I'll look at you and wonder whether you are wasting your effort.
Instead, start thinking about making your functions small enough to be:
- understandable,
- testable,
- re-usable, maybe, where that is appropriate.
If, later, you find some of your code isn't running fast enough, consider how to optimise it.
Note: The optimisation might be as simple as hinting to the compiler to inline the function, so you still get the advantages above without the performance hit.
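For instance, in C++ a small function defined inside the class body is implicitly inline, which is one way to suggest that the call overhead should disappear (a sketch only; the compiler still makes the final decision):

class Foo {
public:
    // Defined inside the class body, so it is implicitly inline:
    // the compiler may substitute the body at each call site.
    void set_x(int x) { this->x = x; }

private:
    int x = 0;
};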
The most important thing about deciding where to break up a function is not necessarily how much the function does. It is rather about determining the API of your class.
Suppose we break Foo::do_stuff into Foo::set_x, Foo::twiddle_x, and Foo::set_y. Does it ever make sense to do these operations separately? Will something bad happen if I twiddle x without first setting it? Can I call set_y without calling set_x? By breaking these up into separate methods, even private methods within the same class, you are implying that they are at least potentially separate operations.
If that's not the case, then by all means keep them in one function.
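As a rough sketch of one middle ground (an option, not a prescription): expose a single public do_stuff so callers can never get the ordering wrong, and split the steps into private helpers only if that genuinely improves readability:

class Foo {
public:
    // The only public entry point: the required ordering of the
    // steps lives here, so callers cannot twiddle x before setting it.
    int do_stuff(int x) {
        set_x(x);
        twiddle_x();
        set_y();
        return y;
    }

private:
    void set_x(int value) { x = value; }
    void twiddle_x() { /* various other operations on x */ }
    void set_y() { y = x; }

    int x = 0;
    int y = 0;
};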
I'm well aware that it's good programming practice to keep your functions/methods as short as possible
I wouldn't use the above criterion to refactor my functions into smaller ones. Below is what I use:
- Keep all functions at the same level of abstraction (see the sketch after this list)
- Make sure there are no side effects (and explicitly document the exceptional cases where there are)
- Make sure a function does not do more than one thing (the Single Responsibility Principle), though you may bend this to honor the first point
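As a rough illustration of the first point (all the names here are invented for the example), the top-level function should read as a sequence of steps at one level, with the details pushed down into helpers:

#include <string>
#include <vector>

struct Order {
    std::vector<std::string> items;
    double total = 0.0;
};

// Lower-level helpers hide the details...
void validate(const Order& order) { /* check the items, report errors */ }
void apply_discounts(Order& order) { /* adjust order.total */ }
void charge_customer(const Order& order) { /* talk to the payment layer */ }

// ...so the top-level function reads at a single level of
// abstraction and does one thing: process an order.
void process_order(Order& order) {
    validate(order);
    apply_discounts(order);
    charge_customer(order);
}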
Other good practices for Method Design
- Don't Make the Client Do Anything the Module Could Do
- Don't Violate the Principle of Least Astonishment
- Fail Fast: Report Errors as Soon as Possible After They Occur
- Overload With Care
- Use Appropriate Parameter and Return Types
- Use Consistent Parameter Ordering Across Methods
- Avoid Long Parameter Lists (see the sketch after this list)
- Avoid Return Values that Demand Exceptional Processing
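To give one concrete, made-up example for the parameter-list advice: related parameters can be bundled into a single type, which keeps call sites readable and stops arguments from being swapped by accident:

#include <string>

// Instead of: create_user(name, email, age, is_admin, send_welcome_mail)
// group the related values into one type.
struct UserSpec {
    std::string name;
    std::string email;
    int age = 0;
    bool is_admin = false;
    bool send_welcome_mail = true;
};

void create_user(const UserSpec& spec) {
    // ... construct the user from spec ...
}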
After reading Clean Code: A Handbook of Agile Software Craftsmanship, which touches on almost every piece of advice on this page, I started writing shorter functions. Not for the sake of having short functions, but to improve readability and testability, keep them at the same level of abstraction, have them do one thing only, and so on.
What I've found most rewarding is that I find myself writing a lot less documentation because it's just not necessary for 80% of my functions. When the function does only what its name says, there's no point in restating the obvious. Writing tests also becomes easier because each test method can set up less and perform fewer assertions. When I revisit code I've written over the past six months with these goals in mind, I can more quickly make the change I want and move on.
I think that in any code base, readability is a much greater concern than a few extra clock cycles, for all the well-known reasons, foremost maintainability, which is where most code spends its time and, as a result, where most of the money on it is spent. So I'm somewhat dodging your question by saying that people don't concern themselves with call overhead because it's negligible compared with other (more corporate) concerns.
Although, if we put those other concerns aside, there are explicit inline statements and, more importantly, compiler optimisations that can often remove most of the overhead involved in trivial function calls. The compiler is an incredibly smart optimising machine, and it will often organise function calls much more intelligently than we could.