OK, on to fields:
(Yes, they're non-null by default, with the same set of special cases that apply elsewhere.)
Fields are likely a case in which different codebases (or even different parts of the same codebase) would be especially interested in different defaults. Some classes have fields that are entirely optional or that have deferred initialization (the latter of which may or may not be a good case for @Nullable, depending on who you ask :)). Other code has entirely final, non-nullable fields. And still other code falls close to one of those extremes but has a few places where @Nullable or @NonNull could patch things up.
The arguments against field-specific defaults are similar to the arguments against a 3-way default in general. (Another one that I forgot: it's arguably useful for `@NonNull` to be a mild smell, at least in @NullMarked code, since it's very rarely necessary there.) In particular, as you note, nullable fields are likely to be associated with nullable setters and getters. Under a fields-nullable-by-default rule, there would be something at least a little surprising about a field `private Map<String, @Nullable Object>` alongside a corresponding getter `public @Nullable Map<String, @Nullable Object>`, for example: the two types would be written differently even though they mean the same thing.
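For concreteness, here's a minimal sketch (the class and member names are invented) of how that mismatch shakes out, first under the actual uniform rule and then under the hypothetical field default:

```java
import java.util.Map;

import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

@NullMarked
class Settings { // hypothetical example class
  // Under the actual uniform rule, the field and its getter read the same
  // way: @Nullable appears in the same positions in both types.
  private @Nullable Map<String, @Nullable Object> overrides;

  public @Nullable Map<String, @Nullable Object> overrides() {
    return overrides;
  }

  // Under a hypothetical fields-nullable-by-default rule, the field would
  // instead be written without the outer @Nullable:
  //
  //   private Map<String, @Nullable Object> overrides;
  //
  // ...while the getter would still spell it out, so the two declarations
  // would look different even though they mean the same type.
}
```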
As I understand it, this is the direction that languages have generally gone when adding nullness. (JSR-305 was unusual in its support for different defaults in different locations, though I admit that the Checker Framework has similar features, too.) Now, it's a bit easier to swallow as a language feature, which typically lets you write just `List<String?>?` instead of `@Nullable List<@Nullable String>`. Still, there are benefits to moving in the direction that languages have gone, whether you're interoperating with such a language (Kotlin), you're optimistic that Java itself will have nullness built in someday, or you're simply familiar with other languages.
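For a side-by-side look at that syntax gap (the wrapper class here is just a placeholder):

```java
import java.util.List;

import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

@NullMarked
class SyntaxComparison { // hypothetical wrapper class
  // Java with JSpecify annotations: a nullable list of nullable strings.
  @Nullable List<@Nullable String> names;

  // The same type in Kotlin, where nullness is built into the language:
  //
  //   var names: List<String?>? = null
}
```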
As for the documentation of the behavior of fields:
Yes, this sounds like something that the User Guide and Javadoc could be more direct about. They make reference to the idea that "references" or "type usages" are treated the same in (almost) all locations, but they don't put as much emphasis on it as they could, nor do they fully define their terms. (Another place to cover this would be in documentation that explains how JSpecify differs from other nullness libraries/tools that users might be familiar with.)
On the handling of unmarked code:
This is something that I've always thought was cool about the JSpecify model but that I've never been sure anyone else will care about :) The JSpecify model lets tools treat unmarked/default/"unspecified" nullness leniently, strictly, or anywhere in between.
Lenient behavior is what we see with Kotlin: You can dereference the result of a call to an unannotated method (as if "unspecified" meant "non-nullable"), and you can pass null to it (as if "unspecified" meant "nullable").
Strict behavior is the opposite: You can't dereference anything until you've proven that it's non-nullable, and you can't pass null to anything.
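Here's a small sketch of the difference, using an invented unannotated library class; the comments describe what each kind of tool would do:

```java
import org.jspecify.annotations.NullMarked;

// Pretend this class lives in a dependency with no nullness annotations,
// so its parameter and return types have unspecified nullness.
class LegacyLib { // hypothetical unannotated API
  static String describe(String input) {
    return "desc: " + input;
  }
}

@NullMarked
class Caller {
  void use() {
    // A lenient (Kotlin-style) tool accepts both lines below.
    // A strict tool flags both until LegacyLib gains nullness information.
    int len = LegacyLib.describe("x").length(); // dereferencing an unspecified return type
    LegacyLib.describe(null); // passing null for an unspecified parameter
  }
}
```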
Strictness even goes beyond that: it's not merely that unannotated return types are treated as "nullable." If that were the case, then overriding one of them (say, by implementing List.subList if it were unannotated) would let you return null. Of course, subList should never return null, so a strict tool shouldn't allow that. And indeed, because the JSpecify model says the return type has unspecified nullness, a tool is free to forbid this, whereas a model that made the return type nullable would have to let it slide.
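To make that concrete, here's a sketch that pretends List.subList were unannotated (the subclass here is invented):

```java
import java.util.AbstractList;
import java.util.List;

import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

@NullMarked
abstract class BrokenList<E> extends AbstractList<E> { // hypothetical subclass
  // If an unannotated subList simply had a "nullable" return type, this
  // @Nullable override would be legal, and callers could get a null
  // sublist. Because the JSpecify model instead says the return type has
  // *unspecified* nullness, a strict tool is free to reject this override.
  @Override
  public @Nullable List<E> subList(int fromIndex, int toIndex) {
    return null; // subList should never return null
  }
}
```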
For all the interest I have in this, I'm not sure how many users will opt in to such strictness. But it's a part of our model, and I've tried it on a couple base packages just to identify dependencies that we don't have nullness information for yet. Maybe we'll see tools apply heuristics in this area, treating some unannotated APIs differently from others?
On information about third-party libraries and the JDK:
It seems likely that the solution is going to be partial files in Java source format (similar to what the Checker Framework uses) but with a way to preprocess them into a form that is more performant at runtime (like NullAway has). For the JDK specifically, we've been working with the Checker Framework's stubs, and most of Google's 2023 nullness work went toward writing those annotations into a copy of the JDK APIs themselves, which we then pointed javac and kotlinc at (details).
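To give a flavor of what such a partial file might look like, here's a sketch in the Checker Framework's stub-file style, with a JSpecify annotation swapped in. The library class and method are invented, and note that a stub like this isn't compilable Java; it's an input that a checker reads alongside the real classes:

```java
// Hypothetical stub for a third-party class, in Java source format:
// method bodies are omitted, and only the signatures we want to
// annotate need to appear.
package com.example.legacy; // hypothetical package

import org.jspecify.annotations.Nullable;

class UserStore {
  // Declares that lookup may return null and that key must be non-null.
  @Nullable String lookup(String key);
}
```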
On a JDK feature:
Yes, that is clearly the best option from my perspective. We've been in contact with OpenJDK to do what we can to keep that possibility open, including when Valhalla touches on nullness. The more that we're able to work out now, the less we leave for them to figure out later, and the better the chance that we converge on answers that would work for both JSpecify and a hypothetical future JDK feature.
And finally (or did I miss anything?):
It is pretty weird that "org.jspecify:jspecify:0.3.0" doesn't appear anywhere, and that the module name ("org.jspecify") doesn't, either. We should fix that.