Java method keyword "final" and its use

When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:

interface Garble {
  int zork();
}

interface Gnarf extends Garble {
  /**
   * This is the same as calling {@link #zblah(int) zblah(0)}
   */
  int zblah();
  int zblah(int defaultZblah);
}

And then

abstract class AbstractGarble implements Garble {
  @Override
  public final int zork() { ... }
}

abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
  // Here I absolutely want to fix the default behaviour of zblah
  // No Gnarf should be allowed to set 1 as the default, for instance
  @Override
  public final int zblah() { 
    return zblah(0);
  }

  // This method is not implemented here, but in a subclass
  @Override
  public abstract int zblah(int defaultZblah);
}

I do this for several reasons:

  1. It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear which methods I have to implement and which methods I may not override (in case I forget the details of the hierarchy).
  2. I think overriding concrete methods is bad according to design principles and patterns, such as the template method pattern (sketched below). I don't want other developers or my users to do it.
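
A minimal sketch of that template-method idea, with hypothetical names, just to make the intent concrete:

abstract class Report {
    // The overall algorithm is fixed and closed with final...
    public final String render() {
        return header() + body() + footer();
    }

    private String header() { return "=== Report ===\n"; }
    private String footer() { return "=== End ===\n"; }

    // ...subclasses supply only the variable step.
    protected abstract String body();
}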

So the final keyword works perfectly for me. My question is:

Why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?


Why is it used so rarely in the wild?

Because you have to write one more word to make a variable or method final.

Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?

Usually I see such examples in third-party libraries. In some cases I want to extend some class and change some behavior, and final gets in the way. It is especially dangerous in closed-source libraries without interface/implementation separation.
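
A hypothetical sketch of the problem (the library class and its methods are invented):

// Imagine this class ships in a closed-source library:
class HttpTransport {
    public final void send(String request) {
        // ... proprietary wire logic ...
    }
}

// A client that wants to log every request is stuck.
// This does not compile: overridden method is final.
class LoggingTransport extends HttpTransport {
    @Override
    public void send(String request) {
        System.out.println("sending: " + request);
        super.send(request);
    }
}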


I always use final when I write an abstract class and want to make it clear which methods are fixed. I think this is the most important function of this keyword.

But when you're not expecting a class to be extended anyway, why the fuss? Of course, if you're writing a library for someone else, you try to safeguard it as much as you can, but when you're writing "end user code", there is a point where trying to make your code foolproof will only serve to annoy the maintenance developers who have to figure out how to work around the maze you built.

The same goes for making classes final. Although some classes should by their very nature be final, all too often a short-sighted developer will simply mark all the leaf classes in the inheritance tree as final.

After all, coding serves two distinct purposes: to give instructions to the computer and to pass information to other developers reading the code. The second one is ignored most of the time, even though it's almost as important as making your code work. Putting in unnecessary final keywords is a good example of this: it doesn't change the way the code behaves, so its sole purpose should be communication. But what do you communicate? If you mark a method as final, a maintainer will assume you had a good reason to do so. If it turns out that you didn't, all you've achieved is to confuse others.

My approach is (and I may be utterly wrong here obviously): don't write anything down unless it changes the way your code works or conveys useful information.


Why is it used so rarely in the wild?

That doesn't match my experience. I see it used very frequently in all kinds of libraries. Just one (random) example: look at the abstract classes in http://code.google.com/p/guava-libraries/, e.g. com.google.common.collect.AbstractIterator. peek(), hasNext(), next() and endOfData() are final, leaving just computeNext() to the implementor. This is a very common example IMO.
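
A typical usage sketch, assuming Guava's AbstractIterator API as described above:

import com.google.common.collect.AbstractIterator;

import java.io.BufferedReader;
import java.io.IOException;

// Only computeNext() is written; hasNext(), next() and peek()
// are inherited as final methods from AbstractIterator.
class LineIterator extends AbstractIterator<String> {
    private final BufferedReader reader;

    LineIterator(BufferedReader reader) {
        this.reader = reader;
    }

    @Override
    protected String computeNext() {
        try {
            String line = reader.readLine();
            return line != null ? line : endOfData();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}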

The main reason against using final is to allow implementors to change an algorithm - you mentioned the "template method" pattern: it can still make sense to modify a template method, or to enhance it with some pre-/post-actions (without spamming the entire class with dozens of pre-/post-hooks).
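
For instance, a subclass might want to wrap a (non-final) template method with a pre-/post-action like this - ConcreteGnarf is a hypothetical implementation of the question's Gnarf:

class TimedGnarf extends ConcreteGnarf {
    // Only possible if zblah() is NOT final:
    @Override
    public int zblah() {
        long start = System.nanoTime();
        try {
            return super.zblah(); // reuse the original algorithm
        } finally {
            System.out.println("zblah() took " + (System.nanoTime() - start) + " ns");
        }
    }
}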

The main reason in favor of using final is to avoid accidental implementation mistakes, or to protect a method that relies on internals of the class which aren't specified (and thus may change in the future).
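
A hypothetical sketch of that second case - a method closed because it maintains an invariant on private state:

import java.util.HashMap;
import java.util.Map;

abstract class Cache {
    private final Map<String, Object> entries = new HashMap<>();

    // final because it maintains an invariant on the private map;
    // an override could easily break that invariant by accident.
    public final Object get(String key) {
        return entries.computeIfAbsent(key, this::load);
    }

    // Subclasses only decide how a missing entry is loaded.
    protected abstract Object load(String key);
}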


I think it is not commonly used for two reasons:

  1. People don't know it exists
  2. People are not in the habit of thinking about it when they build a method.

I typically fall into the second category. I override concrete methods on a fairly regular basis. In some cases this is bad, but there are many times it doesn't conflict with design principles and in fact might be the best solution. Therefore, when I am implementing an interface, I typically don't think deeply enough about each method to decide whether a final keyword would be useful, especially since I work on a lot of business applications that change frequently.


Why is it used so rarely in the wild?

Because it should not be necessary. It also does not fully close down the implementation, so in effect it might give you a false sense of security.

It should not be necessary due to the Liskov substitution principle. The method has a contract and, in a correctly designed inheritance hierarchy, that contract is fulfilled (otherwise it's a bug). Example:

interface Animal {
    void bark();
}

abstract class AbstractAnimal implements Animal {
    public final void bark() {
        playSound("whoof.wav"); // you were thinking about a dog, weren't you?
    }

    private void playSound(String file) { /* ... */ }
}

class Dog extends AbstractAnimal {
  // ok
}

class Cat extends AbstractAnimal {
  // oops - no barking allowed!
}

By not allowing a subclass to do the right thing (for it), you might introduce a bug. Or you might force another developer to build a parallel inheritance tree of your Garble interface right beside yours, because your final method does not allow his class to do what it should do.

The false sense of security is typical of a non-static final method. A static method cannot use state from the instance; a non-static method probably does, and your final (non-static) method probably does too. But it does not own the instance variables - they can be in a different state than it expects. So you put a burden on the developer of any class inheriting from AbstractGarble: to ensure the instance fields are in the state your implementation expects at every point in time - without giving that developer a way to prepare the state before calling your method, as in:

int zblah() {
    prepareState();       // impossible to add when zblah() is final
    return super.zblah();
}

In my opinion you should not close an implementation in such a fashion unless you have a very good reason. If you document your method contract and provide a JUnit test, you should be able to trust other developers. Using the JUnit test, they can actually verify the Liskov substitution principle.
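
A sketch of such a contract test, using JUnit 4 and the question's Gnarf interface (the factory method is an assumption):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Every Gnarf implementation inherits this test by subclassing it,
// verifying the documented contract: zblah() behaves like zblah(0).
public abstract class GnarfContractTest {
    protected abstract Gnarf createGnarf(); // supplied per implementation

    @Test
    public void zblahDefaultsToZero() {
        Gnarf gnarf = createGnarf();
        assertEquals(gnarf.zblah(0), gnarf.zblah());
    }
}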

As a side note, I do occasionally close a method, especially if it sits at the boundary of a framework. My method does some bookkeeping and then delegates to an abstract method implemented by someone else:

final boolean login() {
    bookkeeping();
    return doLogin();
}
abstract boolean doLogin();

That way no one forgets to do the bookkeeping, but everyone can still provide a custom login. Whether you like such a setup is of course up to you :)
