There are multiple questions here (about the use of "final"):
Question 1: Does the final keyword in the method parameter declaration make a difference?
A: As others have noted, the keyword (whether on a method parameter or a local variable) makes no difference to code execution. It is useful for readability and for making intent clear. But it is more than useless syntactic sugar: its most important feature (and the reason you see it used a lot) is that it helps avoid several common coding mistakes. Like this one:
class C {
    long importantValue;

    C(long importantValue) {
        ...
        importantValue = importantValue;
    }
}
The above code contains an often-seen bug which is unfortunately easy to create by accident and easy to miss: the constructor assigns the parameter to itself, leaving the field untouched. The bug can be fixed either by renaming the constructor parameter (which breaks the intuitive reading of the code), or by changing the assignment to say:
this.importantValue = importantValue;
The value of declaring all method variables final (as a discipline) in this case is that it makes it impossible to write the commonly seen bug above, and forces you to choose a solution. Declaring variables final (whether parameters or locals) also helps avoid other common logic bugs that often pop up over the maintenance lifetime of code, in which variables are changed mid-method in ways that surprise people and contradict the original intent.
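As a sketch of that discipline (reusing the hypothetical class C from above): with the parameter declared final, the buggy self-assignment no longer compiles, which forces the this.-qualified fix:

```java
class C {
    final long importantValue;

    C(final long importantValue) {
        // importantValue = importantValue;  // would no longer compile:
        //                                   // cannot assign a value to a final parameter
        this.importantValue = importantValue;
    }
}
```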
Question 2: "It seems like programmers these days are marking all fields and all methods as finals... Am I missing something?"
A: I think you meant "... all fields OF all methods as final", right? Marking methods final has a very different meaning, and should only be done if the intent is to limit extension and overriding by subclasses. There are valid places to do that, but there is NO reason to do it by default (without that clear design choice for the specific method), as it can limit the code's long-term usefulness.
[Implicit, discussed in the thread] Question 3: Does "final" make a performance difference?
A: The answer differs depending on where "final" is used:
- For variables (method parameters and local variables): No, "final" makes no difference to performance whatsoever.
- For methods: No (on virtually all modern JVMs). While you'll find some very old performance tips telling people to declare methods final to reduce their call overhead and improve their chances of being inlined, this advice stopped being true in the late 1990s. All modern JVMs will apply "CHA" (Class Hierarchy Analysis), which is used to *prove* the effective finality of a method when it has exactly one implementor in the currently loaded code. As a result, methods that are not declared as final (but have not been overridden by a subclass) have exactly the same performance behavior as a final method. This includes the ability to use a "static" (as opposed to "virtual") dispatch when calling it, and the ability to safely inline the method in calling code without involving runtime checks or guards. The easiest example for this can be seen with getter and setter methods: they do not need to be declared final, and they are typically inlined such that they translate to a single instruction in the generated code, making the clean encapsulation just as fast as public field access would be, and keeping the code nicely extensible without a performance penalty.
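A minimal sketch of the getter case (the class and names here are illustrative, not from the question): the accessor below is not declared final, but as long as no loaded subclass overrides it, CHA proves it has a single implementation, so the JIT can dispatch it statically and inline it down to a plain field read:

```java
class Point {
    private final int x;

    Point(int x) {
        this.x = x;
    }

    // Not declared final; with only one implementor loaded, CHA lets the JIT
    // inline this call, making it as fast as direct field access.
    int getX() {
        return x;
    }
}
```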
- For static fields: Yes (on virtually all current JVMs). A static final field will be forced to be initialized before code that depends on the class in question is JIT'ed. The knowledge that the static final field will never change again is used by various optimizations to produce much faster code (mostly through constant propagation and resulting dead code elimination). This is highly beneficial and is often used to provide free (in runtime cost) configurability for code behavior (like turning asserts on or off).
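A small sketch of that "free configurability" pattern (the DEBUG flag name is illustrative): because the flag is a static final constant known before dependent code is JIT'ed, the branch below can be constant-propagated and dead-code-eliminated, costing nothing at runtime when it is off:

```java
class Config {
    // Initialized before dependent code is JIT'ed; treated as a constant.
    static final boolean DEBUG = false;

    static int compute(int v) {
        if (DEBUG) {  // constant-folded: the whole branch disappears when DEBUG is false
            System.out.println("computing " + v);
        }
        return v * 2;
    }
}
```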
- For instance fields: It depends (and may matter more, or less, in the mid-term future): The knowledge that an instance field is final can be used to support various very useful optimizations. Most of those relate to the ability to freely move reads of the known-to-be-final field across ordering barriers in a program, which in turn allows the elimination of field reads (including things like hoisting them out of loops), and potentially other optimizations. However, since in practice instance fields are often re-written after initial object construction (e.g. in deserialization, and via reflection), optimizations that assume that "final" actually means final are potentially dangerous, and are therefore avoided in most current JVMs. This may change in the future in various ways: On the one hand, runtime work to "prove" (or safely speculate and recover when wrong) that final fields are actually final can help expose those optimizations, which would make declared final fields provide better performance. On the other hand, the same sort of logic could be used to prove the "effective finality" of fields that are not declared final (much like CHA does for methods), in which case explicit final declarations would not provide a performance difference.
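To illustrate why JVMs are cautious here, a short sketch (the class name is illustrative) of a declared-final instance field being rewritten after construction via reflection; for an ordinary class this is legal, which is exactly what makes optimizing on instance-field finality risky:

```java
import java.lang.reflect.Field;

class Holder {
    final int value;

    Holder(int value) {
        this.value = value;
    }
}

class Rewrite {
    public static void main(String[] args) throws Exception {
        Holder h = new Holder(1);
        // Reflection can still write a non-static final instance field
        // of an ordinary class after setAccessible(true).
        Field f = Holder.class.getDeclaredField("value");
        f.setAccessible(true);
        f.setInt(h, 99);
        System.out.println(h.value);
    }
}
```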