Ep 208 and the road to java.next


Reinier Zwitserloot

Oct 9, 2008, 9:07:04 PM
to The Java Posse
Amongst the -many- great things said in 208, I really liked the wake-
up call given during the discussion about why core libraries tend to
be written in vanilla java. As much as many of us are rooting for
scala and other improvements to gain a foothold, I now believe it won't
really happen.

I'm now committed to see if my toy plan for java.next can actually
become something. I'll quickly explain it:


There are a number of boiler-platey java snippets which you could
easily translate to a hypothetical cleaner, more readable format. You
can translate from this newer format to vanilla java, and back,
without having to resort to 'magick' comments or finicky patterns
which have to be followed exactly. For example, imagine writing a tool
that looks at java 1.4 style code, and translates array and list
iteration to a foreach loop. It's not hard to imagine this, and I doubt
you'll run into the problems you have with GUI generators (where any
change to the underlying 'vanilla' java tends to confuse the GUI
generator so much that it can no longer make sense of the java code).
It's just too straightforward.
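
To make that concrete, here's the sort of before/after such a pattern
detector would handle; both halves are plain java, and the names are just
placeholders I made up:

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ForEachDemo {
    public static void main(String[] args) {
        List names = Arrays.asList(new String[] { "a", "b", "c" });

        // java 1.4 style: iterator boilerplate, as stored on disk
        for (Iterator it = names.iterator(); it.hasNext(); ) {
            String name = (String) it.next();
            System.out.println(name);
        }

        // what the tool would show in the editor (and write back) instead
        for (Object name : names) {
            System.out.println(name);
        }
    }
}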

I can come up with lots of useful java language additions that are
similarly translatable. This isn't exhaustive, but here are a few
examples (with a rough vanilla-java sketch after the list):

- CICE. CICE in particular is an exact boilerplate to shorter format
translation. The following:

new Thread(new Runnable() { public void run() { /* code */ } });

becomes:

new Thread(Runnable() { /* code */});

which translates forwards and backwards, no problem.

- 'properties' - a way to annotate or otherwise mark private fields
so that vanilla get, set, and optionally, addPropertyListener, is
generated. The only question in translating back and forth is where
the getter and setter methods should show up in the code
(alphabetically sorted, inserted right below the field? grouped?).
It's not quite the full blown properties that Joe wants, but it's a
step in the right direction.

- list and map literals.

- default parameter values.

- map foreach (for (K key, V value : someMap) { /* code */ } instead
of using Map.Entry directly ).

- infer type for local final variables (just 'final foo = "hello";' is
valid).
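
As a rough sketch, here is the vanilla java that a few of the items
above could round-trip to; the java.next side is only shown in
comments, since that surface syntax is still hypothetical:

import java.util.HashMap;
import java.util.Map;

public class SugarDemo {
    // default parameter values would desugar to delegating overloads;
    // the 'port = 80' default here is a made-up example
    static void connect(String host) {
        connect(host, 80);
    }

    static void connect(String host, int port) {
        System.out.println(host + ":" + port);
    }

    public static void main(String[] args) {
        // a map literal like {1 -> "one", 2 -> "two"} would desugar to:
        Map<Integer, String> numbers = new HashMap<Integer, String>();
        numbers.put(1, "one");
        numbers.put(2, "two");

        // map foreach, for (K key, V value : numbers), would desugar to:
        for (Map.Entry<Integer, String> entry : numbers.entrySet()) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }

        connect("example.com");
    }
}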


I'd like to make plugins for netbeans and eclipse which let them
transparently support these features. The key factor is that, on disk,
ONLY vanilla java is stored. When you read in the vanilla java, the
code is automatically scanned for patterns and converted (in memory
only) to the newer syntax, and then, when you save the file, it's
converted back to vanilla java again. This means that boilerplatey
code written by anyone, even those not aware of the existence of
java.next / these plugins, gets converted. You get the benefits of the
new language even for your 'old' code - there's NO need to refactor
your code, there's no need to mess with your tool chain, and there's
no need to reach a consensus before turning on this plugin when
working in a bigger team. No one would notice unless you told them.

In order for this to work, I was planning on building the following
parts:

1. a modified javac that understands all the new features and knows
how to generate the right bytecode. This is needed because you at
least need to parse the new format, and if you can't compile it
directly, you get screwed up line numbers for debugging, which is no
good. Probably stored as a branch off of javac trunk.

2. A converter which converts a javac generated AST to the eclipse
AST model. The eclipse parser pretty much sucks, and from my limited
experience, javac is now better at error recovery (making sense, as
far as possible, of code with syntax errors in it) than eclipse's 1985
Ada parser based thing, so this seems like an easier route than
hacking ecj.

3. Two ast-to-ast converters; one to convert vanilla java to new
format java, and one to convert new format java back to vanilla java.
I'm looking at ripping jackpot back out of netbeans' refactoring
engine and using that (it's an AST to AST converter, after all). This
would involve updating jackpot so it knows about new java's syntax (a
rough sketch of the converter contract follows below, after part 5).

4. A plugin, for both eclipse and netbeans, which replaces the parser
and compiler with the modified javac (in eclipse's case, the modified
javac + the javac AST to eclipse AST converter), and where needed adds
quickfixes and other polish. Due to the design of at least eclipse,
this can't be released as a traditional plugin as it has to replace
the java-specific plugins which are pretty deeply entrenched;
therefore,

5. an installer which backs up the existing java plugins, then
patches them with the various additions, or, alternatively, a
repackaged complete IDE that's been patched. The latter is easier but
forces the user into a specific version of eclipse/netbeans.
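
To make part 3 slightly more concrete, the pair of converters could
share one contract along these lines; this is a purely hypothetical
sketch, nothing here exists yet and the names are invented:

// Hypothetical sketch only: 'Tree' stands in for whatever AST type
// (javac's, or jackpot's) ends up being used.
public interface SugarTranslation<Tree> {

    // Detect the vanilla boilerplate pattern in a freshly loaded file and
    // rewrite it (in memory only) to the shorter java.next form.
    Tree toJavaNext(Tree vanillaJava);

    // On save, rewrite the java.next construct back into the exact vanilla
    // boilerplate, so only plain java ever hits the disk.
    Tree toVanillaJava(Tree javaNext);
}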


As you can see, this is complicated. If anyone wants to help out,
email me. If you have a lot of experience hacking netbeans/eclipse in
particular, I'd love to hear from you. I'm not too sure I can do those
bits without spending /a lot/ of time researching.

Viktor Klang

Oct 10, 2008, 3:44:48 AM
to java...@googlegroups.com
Hello Reinier,


Why not:

Thread({/*Code*/}) ?

The need for the "new" keyword is gone.
The compiler knows the constructor args and can infer the type (in this case Runnable)
The compiler knows that Runnable is a one-method interface, so it allows the closure block.
 


which translates forwards and backwards, no problem.

 - 'properties' - a way to annotate or otherwise mark private fields
so that vanilla get, set, and optionally, addPropertyListener, is
generated. The only question in translating back and forth is where
the getter and setter methods should show up in the code
(alphabetically sorted, inserted right below the field? grouped?).
It's not quite the full blown properties that Joe wants, but its a
step in the right direction.

Getters and setters à la Java are simply 100% boilerplate.
compare:

case class Person(var firstName: String, var surName: String)


to

public class Person
{
    private String firstName;
    private String surName;

    public final void setFirstName(final String firstName)
    {
        this.firstName = firstName;
    }

    public final String getFirstName()
    {
        return this.firstName;
    }

    public final void setSurName(final String surName)
    {
        this.surName = surName;
    }

    public final String getSurName()
    {
        return this.surName;
    }

    // Also, I didn't bother to write equals, hashCode and toString, but add another 20 lines of code below here for that
}
 


 - list and map literals.

...and remove arrays from the language... (they are a primitive optimization construct, the language should only expose ArrayList, and then the VM can use arrays under the hood)
 


 - default parameter values.

Praise the Lord!
 

 - map foreach (for (K key, V value : someMap) { /* code */ } instead
of using Map.Entry directly ).

or like: map.foreach( (K key, V value) { /*Code*/} )
 


 - infer type for local final variables (just 'final foo = "hello"; is
valid).

Or simply make "final" the default.
 



--
Viktor Klang
Senior Systems Analyst

Reinier Zwitserloot

Oct 10, 2008, 9:17:05 AM
to The Java Posse
Answers inline.

On Oct 10, 9:44 am, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
>
> Why not:
>
> Thread({/*Code*/}) ?

Because that's not CICE. The concept of 'auto-boxing' a lone codeblock
is enticing, but you at the very least need 'long returns/breaks' (the
concept where a return, break, or continue statement inside a closure
is capable of escaping the closure; that's part of BGGA but not CICE,
and it creates issues: for a thread, for example, a long return makes
no sense, because you can only use it if the closure is discarded after
the command you fed it to is done), and you probably need some sort of
shorthand syntax for feeding parameters into the codeblock. The above
only happens to work because Runnable's one method has zero parameters.

CICE is easy to translate back and forth for all possible usage
scenarios, without having to add -any- extra libraries. This is
utterly impossible with BGGA; you need the Throwable-based classes to
support long returns/breaks, the Function classes to support
structural typing, and more.
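
For contrast, here is what plain anonymous-class semantics give you
today: a return inside run() only exits run(), never the enclosing
method, which is exactly why BGGA-style long returns need the extra
Throwable-based library support mentioned above. The class below is
just an illustration I made up:

public class LongReturnDemo {
    static void demo() {
        new Thread(new Runnable() {
            public void run() {
                return; // exits only run(); it cannot 'long return' out of demo()
            }
        }).start();
        System.out.println("demo() keeps going"); // always reached
    }

    public static void main(String[] args) {
        demo();
    }
}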

>
> The need for the "new" keyword is gone.

CICE is a significant improvement to syntax. Cutting the 'new' off of
constructor calls is a minor change that doesn't seem to improve
readability much, primarily relying on the convention that classes are
capitalized to make it obvious that it's a constructor call instead of
a normal call. You also have to drop the split in constructor call /
method call that is currently part of the AST generation process,
which is a big step that this project will not be taking any time
soon. So, yeah, part of this effort is thinking it through, not just
for the syntax, but for the side-effects of how it would translate and
how complicated the javac patch is going to be.

> > which translates forwards and backwards, no problem.

Problems: long returns/breaks, lack of 'new' is enforced; you can't
have two different equally valid translations, so whatever syntax you
go with has to be the -only- syntax.

>
> case class Person(var firstname, var surname)

As I said, the list wasn't complete. scala-style case classes are part
of the deal, and will also auto-generate equals and hashCode, a
constructor, getters, and if you like, setters (though I'm edging
towards just allowing immutables. Keeps it simple, and immutables are
good). That doesn't remove the need to have getters and perhaps
setters for classes that fundamentally aren't value based. For
example, properties are popular in GUI classes (see Cocoa, which is a
big part of Joe's tireless campaigning for them), and it would be
really strange to build, say, a 'Slider' widget class with a case
class system. Yet, they most definitely have properties. The
value of the slider has to be a complete property, with getters,
setters, and add/removeValueChangeListener.
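
For the record, the boilerplate such a Slider property stands for in
vanilla java today looks roughly like this; a sketch using the standard
java.beans.PropertyChangeSupport, with the Slider class itself just an
illustration:

import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

public class Slider {
    private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
    private int value;

    public int getValue() {
        return value;
    }

    public void setValue(int value) {
        int old = this.value;
        this.value = value;
        changes.firePropertyChange("value", old, value); // notify listeners
    }

    public void addValueChangeListener(PropertyChangeListener listener) {
        changes.addPropertyChangeListener("value", listener);
    }

    public void removeValueChangeListener(PropertyChangeListener listener) {
        changes.removePropertyChangeListener("value", listener);
    }
}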

>
> ...and remove arrays from the language... (they are a primitive optimization
> construct, the language should only expose ArrayList, and then the VM can
> use arrays under the hood)

Arrays will indeed be removed from the language, so that [] can be
recycled without making the parsing step more complicated than the
current infrastructure can handle (again, the infrastructure of the
parser, such as its LL(1) nature, will _NOT_ be changed, at least not
for v1.0, because that makes everything far too complicated). You can
still use arrays (I have to translate mentions of arrays in a vanilla
java source file to something, not to mention that they still have
their uses; a byte[1024] is ~1032 bytes in memory, while a List<Byte>
with 1024 entries is ~16448 bytes in memory on 64-bit (divide roughly
by half on 32-bit), so, no, arrays can't go away until the compiler is
smart enough to store a List<Byte!> as a raw byte stream in memory,
where '!' is a shorthand for 'cannot be null' - a feature that might
make it in, especially if the JSR that adds @NonNull to java core
libraries goes through). I'll probably make 'Array' (with a capital) a
keyword, and have some sort of syntax to use it to create arrays and
declare array types. Yes, shamelessly ripped from scala.

>
>
>
> >  - default parameter values.
>
> Praise the Lord!

It's not as thorough as e.g. python, but it should be enough,
especially if you do the coding in java.next, instead of relying on
automatic translation of vanilla java.

>
> or like: map.foreach( (K key, V value) { /*Code*/} )
>

That's further than java.next can reasonably go, assuming that is
syntactic sugar for calling:

public void foreach(closure {K key, V value => void}).

1) Adding methods cannot be done in a way that is sanely translatable
unless I hardcode the list of extra methods, along with the patterns
in vanilla java.

2) The amount of syntax sugar involved in translating
(formalParameterList) { codeblock } into a method call is considerable.
Definitely not one of the patterns that'll go in anytime soon, as it's
rather contentious. There are plenty of changes that everyone wants,
but which, for whatever reason, Sun isn't planning to add anytime soon.

If you are suggesting that the above syntax is just the way the map
foreach pattern works, then, I don't agree; it LOOKS like foreach is a
method, while it really isn't and that's just how the syntax is
written. Not to mention that it would be a major hassle to make the
parser understand this without making foreach a keyword, which kinda
defeats the purpose.

3) Clearly it would have to be forEach :)

> >  - infer type for local final variables (just 'final foo = "hello"; is
> > valid).
>
> Or simply make "final" the default.

No, java is not ready for that change. Try it, go program using only
'final', everywhere. You'll go nuts. Java needs more basic glue that
is designed to work with immutables properly before you can make that
kind of change.

You're also entirely missing the point of this exercise, so I'll
explain that in the next comment.

Reinier Zwitserloot

Oct 10, 2008, 9:24:59 AM
to The Java Posse
The point of the java.next exercise is definitely not to come up with
an entirely new language that fundamentally prefers different paradigms
and can hardly be called 'a lot like java' with a straight face.

Scala is right there.

The point is to add simple, minor additions (with a big impact!),
making it easier to program java using the modern understanding of
nice java code: lists/maps instead of arrays; more immutables where
you can (not all immutables, all the time); getters, setters, and
propertyChangeListeners; foreach instead of looping on a counter; more
type info in the typing system, such as generics; and builders instead
of enormous parameter lists. Those kinds of minor but important things
that people are already used to!

There's plenty of low hanging fruit here, and that's what this project
intends to pick. Once you have a version out there that handles the
low hanging fruit, you can first evangelize it, getting it widely
accepted (unlike all those other contenders, see ep #208), simply
because no one has to change ANYTHING: Not their tool chain, not their
IDE, not their internal style and acceptable-language rules, not the
minds of their coworkers, and not the existing source code base, and
not even your own sense of how java programs are supposed to be
programmed. The way I see it, this thing should spread like wildfire
if done right.

THEN, perhaps, you can use it as a basis to go into uncharted
territory and mess with the fundamentals. I'd say that it's actually
bad to spend too much time thinking about that. I think it's obvious to
say that you can do amazing things here, but, first, we need a proof
of concept, we need a brand name that existing java programmers know,
and more importantly, -trust-, as being so much simpler than e.g.
switching to scala, and we need a big user base. If you run before you
can walk, you won't get anybody except language nuts, and in that area
you have fierce competition (Scala). Given that Odersky is way smarter
than I am, I don't see the point of going that route.

Jess Holle

Oct 10, 2008, 1:05:04 PM
to java...@googlegroups.com
Reinier Zwitserloot wrote:
Answers inline.

On Oct 10, 9:44 am, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
  
Why not:

Thread({/*Code*/}) ?
    
Because that's not CICE.
One thing that came up in the context of BGGA that's really separate from the larger proposal in many respects is the concept of exception transparency: being able to genericize some code without wrapping and unwrapping exceptions all over, instead letting those thrown by the blocks/closures/callbacks propagate as-is.  It's been a while, but in a way this struck me as "throws clause inference", and overall it seemed really helpful compared to today's world in this area.

 - infer type for local final variables (just 'final foo = "hello"; is
valid).
      
Frankly I really dislike the idea of inferring types in this context, but I guess there are those who like this.  I'm all for addressing all the cases where one has to respecify types that are obvious on the right side of the equals, though.

--
Jess Holle

Reinier Zwitserloot

Oct 10, 2008, 1:50:04 PM
to The Java Posse
In regards to the annoyance of explicit checked exceptions and
closures:

oh, yeah - sure. Checked exceptions are annoying in the context of
closures. Scratch that: Checked exceptions are annoying generally. The
theory behind it is definitely sound, and proper usage of them would
have been great, but 90% of all java libraries, including java core,
do not use checked/unchecked exceptions correctly. We should get rid
of them. But that's not for java.next to decide (at least not v1.0) -
AGAIN, that's far too much of a change. That's a step that something
on the scale of Scala should take. So, you're not getting the point of
this exercise.

Also: How could I possibly rewrite a closure system which ignores
checked exceptions into reasonable vanilla java? I don't see how.
Remember, in 9 out of 10 cases, you're implementing something or
other, and you may not throw any more exceptions than have already
been listed, so silently writing all checked exceptions in the
'throws' clause won't work. The CICE syntax already allows (or can
easily be modified to allow if the 'official' CICE spec doesn't say
you can do this) an implied throws clause (simply copied from the one
method of the interface/abstract class you're implementing).

I could at some point introduce a very simple utility class which uses
a hack to silently throw the exception, and have the plugin
transparently write this into the local package-space so as not to
actually require adding libraries, but that's adding a whole new level
of bad hackery on top of a simple system and thus not a place I want
to go to for v1.0. Really - this kind of thing should be reserved for
when java.next is sufficiently accepted that a lot of code shops
configure their plugin not to translate back to vanilla java; that
way, the compiler can directly ignore the checked exception (for those
that don't know, checked exceptions are purely a figment of javac; at
the JVM (class file) level, all exceptions are unchecked). And this
project will not be going there until it's picked up lots of momentum
already.
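
For those wondering what such a utility could look like: the usual trick
is to abuse generics erasure so the compiler never sees the checked
exception being thrown. This is only a sketch of the kind of hack meant
above; the class name and API are made up, and again it's exactly the
sort of thing that's off the table for v1.0:

public final class SneakyThrow {
    private SneakyThrow() {
    }

    // Rethrows any throwable, checked or not, without declaring it.
    public static RuntimeException sneakyThrow(Throwable t) {
        if (t == null) throw new NullPointerException("t");
        return SneakyThrow.<RuntimeException>sneakyThrow0(t);
    }

    @SuppressWarnings("unchecked")
    private static <T extends Throwable> T sneakyThrow0(Throwable t) throws T {
        // the cast is erased at runtime, so this really throws 't' as-is
        throw (T) t;
    }
}

A caller would write 'throw sneakyThrow(e);' so the compiler's flow
analysis stays happy while the original exception escapes unchanged.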

> Frankly I really dislike the idea of inferring types in this context,
> but I guess there are those who like this.  I'm all for addressing all
> the cases where one has to respecify types that are obvious on the right
> side of the equals, though.

Remember, this inference only works for final variables. Don't
consider the variable a real variable with a type; instead consider it
like a macro. Here, this is legal java:

String foo = someMethodCall();
System.out.println(foo);

okay, so, here we can clearly see that the type of 'someMethodCall' is
(assignment compatible with) a String. However, the following is
equally legal java:

System.out.println(someMethodCall());

Except we just lost our typing info. See, you're complaining about
something that java ALREADY 'suffers' from (I use the term loosely,
because as the above example should show, it's not that much of a
loss).

By allowing inference only for final stuff, you're essentially saying:
Here, resolve this expression, and let's name this thing so I can
quickly refer to it multiple times later. Don't consider it: "here's
a variable (of type T), and I'll store this value in it, and, oh, I'll
declare here that I'll never change the value again". Perhaps it seems
like a semantically small difference, but that kind of reasoning
convinced me that inference for final variables is a great idea (I was
actually moderately against it before).

Also, don't forget that an IDE can figure all this out for you;
floating over 'someMethodCall()' in a good IDE should tell you that
it's a String. Similarly, floating over the variable name should do the
same thing in a java.next aware IDE. This is part of the polish I was
talking about, adding stuff like this. I know there's a common
argument that 'code should be readable even without an IDE', but,
frankly, I think that's bull; java code fundamentally isn't very
readable without tools to help you navigate the quagmire. With tools,
java is unbeaten in how easy it is to navigate. Play to your
strengths, don't try to cover a weakness that realistically isn't
going away.

Jess Holle

Oct 10, 2008, 3:17:41 PM
to java...@googlegroups.com
Reinier Zwitserloot wrote:
In regards to the annoyance of explicit checked exceptions and
closures:

oh, yeah - sure. Checked exceptions are annoying in the context of
closures. Scratch that: Checked exceptions are annoying generally. The
theory behind it is definitely sound, and proper usage of them would
have been great, but 90% of all java libraries, including java core,
does not use checked/unchecked exceptions correctly. We should get rid
of them. But that's not for java.next to decide (at least not v1.0) -
AGAIN, that's far too much of a change. That's a step that something
on the scale of Scala should take. So, you're not getting the point of
this exercise.
  
I may have missed the point of the exercise in this case :-)

I'd just really like exception transparency more than most anything else from BGGA -- or more than the rest of closures actually.  Unfortunately I forget the syntactical details of BGGA in this regard, but essentially this is like having a generic signature like:
public <T, E extends Throwable>  T  call() throws E;
where E can represent 0 or more throwable types.  Allowing this to propagate up a generic stack so that you can just pass something that throws E to a body of code that throws F and express that this body of code will then throw E and F would make generic code more reusable.
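
A rough sketch of that idea in today's generics, for a single exception
type parameter (the names are illustrative only); the limitation is
exactly the one described above, namely that one E cannot stand for an
arbitrary set of throwable types:

// A callback whose checked exception type is carried as a type parameter.
interface Block<T, E extends Throwable> {
    T call() throws E;
}

class Runner {
    // Generic code that neither wraps nor unwraps: whatever E the block
    // declares simply propagates to the caller of run().
    static <T, E extends Throwable> T run(Block<T, E> block) throws E {
        return block.call();
    }
}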

Apart from this sort of issue I like checked exceptions :-)
Frankly I really dislike the idea of inferring types in this context,
but I guess there are those who like this.  I'm all for addressing all
the cases where one has to respecify types that are obvious on the right
side of the equals, though.
    
Also, don't forget that an IDE can figure all this out for you;
floating over 'someMethodCall()' in a good IDE should tell you that
it's a String. Similarly, floating over the variable name should do the
same thing in a java.next aware IDE. This is part of the polish I was
talking about, adding stuff like this. I know there's a common
argument that 'code should be readable even without an IDE', but,
frankly, I think that's bull; java code fundamentally isn't very
readable without tools to help you navigate the quagmire. With tools,
java is unbeaten in how easy it is to navigate. Play to your
strengths, don't try to cover a weakness that realisitically isn't
going away.
  
Yes.  What I don't like about this is that:
  1. Someone changes the return type of someMethodCall() and the meaning of the code then changes without anyone being the wiser, whereas an explicit type on the left side of the equals is an assertion of the expected type of the result.
  2. Code snippets should be expressive.  I shouldn't need to load the source file and all its compilation dependencies into an IDE to make some sense of the code.  Yes, they'll be more readable with an IDE -- I love my IDE for such reasons, but left-side-of-equals type inference makes code substantially less readable without an IDE and a fully configured environment therein.  That's not a good thing.
Yes, I know these downsides also apply if you do:
System.out.println( someMethodCall() );
but that only impacts the single line -- and once they call someMethodCall() twice most folk are smart enough to stick it in a local variable instead of calling it repeatedly.  Allowing
final foo = someMethodCall();
encourages these downsides to be propagated throughout dozens and dozens of usages of 'foo' and thus has a more corrosive effect.

--
Jess Holle

Reinier Zwitserloot

Oct 10, 2008, 8:10:31 PM
to The Java Posse
This is an important issue, because there's no waffling on this.
Because things get translated back and forth, in java.next, if this
final inference thing is in, it would be IMPOSSIBLE to write:

final String foo = someMethodCall();

after all, on save this gets stored in the .java file as per the
above, and then on load, the pattern detector will eliminate the
'String' part when it realizes that the return type of the call is the
same as the type of the variable. Other than magic comments (which, as
I mentioned, are out, at least for this sort of thing), there's no way
to store this on disk in a way to differentiate.

So, there you have it. Even type inference is contentious. If this
project gets that far, I'll look into only inferring when the right
hand side is a literal or a constructor, e.g:

final foo = "x";

final bar = new Whatever();

Jess Holle

Oct 10, 2008, 8:33:26 PM
to java...@googlegroups.com
Reinier Zwitserloot wrote:
This is an important issue, because there's no waffling on this.
Because things get translated back and forth, in java.next, if this
final inference thing is in, it would be IMPOSSIBLE to write:

final String foo = someMethodCall();

after all, on save this gets stored in the .java file as per the
above, and then on load, the pattern detector will eliminate the
'String' part when it realizes that the return type of the call is the
same as the type of the variable. Other than magic comments (Which as
I mentioned, are out, at least for this sort of thing), there's no way
to store this on disk in a way to differentiate.
  
Given that I use 'final' everywhere I can, this transformation would make it very hard for me to read my own code.

Viktor Klang

Oct 11, 2008, 6:11:40 AM
to java...@googlegroups.com
Reinier,

my point (although somewhat between the lines) was that doing things suboptimally is bad in the long run.

Software development is my passion and profession, and I ain't gonna touch Java as soon as people start paying me to code in languages that I actually enjoy using (read: Scala).

I'm not talking about Java Joe, I'm talking star developers leaving the sinking Java-ship, and putting some lipstick on the pig ain't gonna have them staying onboard.

Sorry for the rough attitude, but the Java boilerplate is bad for one's mental health.

Cheers,
Viktor

Mark Derricutt

Oct 11, 2008, 6:14:21 AM
to java...@googlegroups.com
Speaking of star developers leaving, I see Stanley Ho has just left Sun and handed over the spec lead for JSR 277 (Super packages); this was posted to the JSR 277 EG list this morning:

Hi JSR 277 experts,

This is to let you know that today is my last day at Sun. It has been a great pleasure to work with all of you, and I really appreciate the contributions all of you have made in this JSR.

Alex Buckley will take over the responsibility for JSR 277, and serve as the sole spec lead. I have been working with Alex to make sure the transition is smooth. If you have any questions or concerns about the JSR, please contact alex.b...@sun.com directly.

Going forwards, I'm moving to Research In Motion. I'm reachable on Linkedin. Hopefully, many of you will stay in touch and our paths will cross again.

Mark


On Sat, Oct 11, 2008 at 11:11 PM, Viktor Klang <viktor...@gmail.com> wrote:
I'm not talking about Java Joe, I'm talking star developers leaving the sinking Java-ship, and putting some lipstick on the pig ain't gonna have them staying onboard.


--
"It is easier to optimize correct code than to correct optimized code." -- Bill Harlan

Reinier Zwitserloot

Oct 11, 2008, 9:22:46 AM
to The Java Posse
That kind of attitude is not useful to this project, unfortunately.
Your insinuation that e.g. inference of everything is clearly superior
to java will for example instantly lose Jess Holle; he'll never join
your side until you've become very mainstream. A lot of language
nuts / scala converts forget this point.

You also seem to misunderstand the concept of 'gradual change', and
you missed an earlier point I made: Scala is NOT going to become a
popular language. It's too much at once for the mainstream, so they
wont go to it. Java.next might at some future date be more like Scala
than real Java, but it'll get there in baby steps, gaining converts
every step of the way. It might help to explain a key benefit of this
scheme:

Because everything is stored in vanilla java, you can actually make
breaking changes between versions without actually breaking anything;
you update your plugins, and the next time you open source files,
voila, you all of a sudden have a slightly different syntax for some
boilerplate or other. If, at some point, java.next is popular enough
that the 'convert to vanilla java' step is eliminated in a significant
number of code shops, you still don't lose the benefit. Every
java.next file on disk has a source flag so that future versions
(with, maybe, breaking changes in it) can apply the same read-and-
rewrite trick to update all code as it gets read in.

Back to the topic of Scala for a moment: I really don't think it'll
happen, but if you have faith, you may want to join the project. The
compiler's recovery needs a heck of a lot of work before scala is
ready for the mainstream. Right now, scalac will pretty much bite your
head off if you make one typo. Also, those people who immediately
flock to the immutable collections and make the natural assumption
that they can safely use these in multi-threaded apps put these in
production and then see their project fail utterly, because Scala's
immutable collections aren't thread safe. There's plenty of nagging
problems there for you to look at and fix - assuming you want the
world to change and allow Scala to become mainstream.

On Oct 11, 12:11 pm, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
> Reinier,
>
> my point (although somewhat between the lines) was that doing things
> suboptimally is bad in the long run.
>
> Software development is my passion and profession, and I ain't gonna touch
> Java as soon as people wll start to pay me to code in languages that I
> actually enjoy using (read: Scala).
>
> I'm not talking about Java Joe, I'm talking star developers leaving the
> sinking Java-ship, and putting some lipstick on the pig ain't gonna have
> them staying onboard.
>
> Sorry for the rough attitude, but the Java boilerplate is bad for one's
> mental health.
>
> Cheers,
> Viktor
>

Viktor Klang

Oct 12, 2008, 9:04:44 AM
to java...@googlegroups.com
On Sat, Oct 11, 2008 at 3:22 PM, Reinier Zwitserloot <rein...@gmail.com> wrote:

That kind of attitude is not useful to this project, unfortunately.
Your insinuation that e.g. inference of everything is clearly superior
to java will for example instantly lose Jess Holle; he'll never join
your side until you've become very mainstream. A lot of language
nuts / scala converts forget this point.

Ok, I'll stop negging the java.next idea and actually hop aboard here for a second.
 
What we're talking about here is to seamlessly get rid of some of the crappy language design of Java, using source rewriting (bytecode rewriting as well?) to update existing Java code to newer, not-so-boilerplatey java.next code.

Everything's good so far.

And I agree that there are lots of low-hanging fruit to pick here.

Here are my suggestions:

1) Remove checked exceptions
    Why: Clutters the code without actually adding anything
2) Add type aliases
    Why: To avoid repeating yourself
3) Add Scala case class (autogeneration of hashCode, equals and toString)
    Why: Removes about 90% of all javabean code
4) Add non-nullability without the retarded annotations (<typename>? = type or null, not nullable being default)
5) Add the possibility to have method definitions in interfaces
    Why: this makes it possible to add methods to interfaces without breaking everything apart
6) Add CICE
    Why: Because anonymous inner classes as closures is 80% boilerplate
7) Add tailcalls to the JVM
    Why: Because torturing the stack is not kind
8) Remove PermGenSpace and place the classes into the regular heap
    Why: Because closures add a lot to permGenSpace, and unloading classes is not a bad idea
9) Add message-based concurrency (actor based perhaps?)
    Why: Because empirically, very few programmers are competent enough to write shared-memory-based concurrent software.


Sorry for the negative attitude earlier, I'm just kind of pissed off at Sun for letting the language deteriorate into slush.

Cheers,
Viktor

Jess Holle

Oct 12, 2008, 9:20:15 AM
to java...@googlegroups.com
Viktor Klang wrote:
Here are my suggestions:
I'm not sure how feasible all of these are via source re-writing -- I'll honestly say I just didn't ponder that question long for each of these.  Some strike me as less than amenable to this approach (e.g. #8 in particular).

Of these I have to say I'd personally only get excited about #5.  Multiple inheritance of method implementation via interfaces would be really helpful in cases.  [I think you end up needing non-public methods in interfaces to pull this off well, though -- as you need something to provide unvarnished get/set access to the implementing class' properties and you almost certainly don't want this public.  You want the "varnished" get/set access to be public, i.e. that with validation whose default algorithm is specified by the interface, etc.]

Actually I don't really buy #8 at all -- classes can be unloaded today.  There are restrictions on when, but changing those vs. working with them is a JVM spec change, not a source rewrite or even JVM implementation issue.

--
Jess Holle

Reinier Zwitserloot

Oct 13, 2008, 5:13:02 AM
to The Java Posse
Okay, most of those aren't low hanging fruit. I've spent the past week
working out specifically how to translate from source to source and
its just too difficult for many of the changes I had in mind. CICE,
rewiring == to use .equals, and value classes are really the only big
ones that 'work'.

I'll explain the insane complexity of non-nullness just as a single
example to show how extraordinarily difficult being compatible with
java (even on the class file level) is:

non-nullness seems like a simple thing to add. For argument's sake,
lets say that a standard type is nullable, and a suffix ! indicates
never null. (I know the prevailing attitude is that non-null should be
the default, and I sort of agree with this, but its easier to explain
the difficulty this way, so bear with me. It's easy to see that if you
can make '!' work, then going the extra step and making that the
default, and '?' as a marker to indicate nullity, is trivial, so it
doesn't really matter which one of the two gets proved to be nearly
impossible to implement).

So, here's the ! marker in action:

List!<String!> list = new ArrayList<String!>();
list.add(null);    // compiler error; list contains 'String!', so null is not allowed.
list = null;       // compiler error; list is of type List!, so may not be null.
String! foo = list.get(0);   // okay
String bar = list.get(0);    // okay; returns a String! but that is assign-compatible with String.
String x = "foo";
list.add(x);       // not okay. x might be null.
list.add((!)x);    // okay. (!) is shorthand for 'throw NPE if expression is null'.

Because the ! suffixes are everywhere, including in method return
types, you won't be using (!) all that much. Also, compiler analysis
will be smart enough to recognize 'if ( foo == null ) { /* code */ }
else { /* code where 'foo' is treated as non-null */ }'. This is great
for e.g.:

Map!<Integer!, String!> map = {1 -> "one", 2 -> "two"};  // pseudocode
List!<String!> list = new ArrayList<String!>();
String! foo = map.get(0);   // NOT OKAY - .get() returns null to indicate not found.
String x = map.get(0);      // this is okay.
if ( x == null ) { sysout("not found"); } else { list.add(x); }  // okay - compiler analysis.

Bit more complicated, but we can make this work.

Now we get into the nasty bits: generics.

List<String!> list = new ArrayList<String!>(); //okay
List<String> list2 = list; //is this okay?
list2.add(null);
String! x = list.get(0);
//Ah. So, that's NOT OKAY. But then...
List<String> a = new ArrayList<String!>(); //ALSO NOT OKAY
List<String!> b = new ArrayList<String>(); //NOT OKAY EITHER

In the above code snippet, 'x' would contain null, even though it's of
type 'String!'. Between obvious speed issues and a massive set of
existing libraries that can't just be recompiled, it's not possible to
runtime-check everything, so the type system needs to ensure
correctness. This is just like generics, which also isn't runtime
checked. And so we have the same issues. Just like List<Number> c =
new ArrayList<Integer>(); is NOT LEGAL in current java, List<String> d
= new ArrayList<String!>() can't be legal either, because you can use
it to break the forced non-nullity.

Assigning a List<String> to a variable of List<String!> is also not
okay, for perhaps more obvious reasons.

Just like generics, if we somehow had a guarantee that we're only
going to read from the list, you COULD assign a List<String!> to a
variable of type List<String>; the fact that we have to do null-checks
on items retrieved from the list even though they aren't ever null is
not a problem. Similarly, if we somehow knew that we're only going to
write into the list, then assigning a List<String> to a List<String!>
variable is okay; we'd be forced to only write non-null into the list,
which is of course always okay, regardless of whether the list allows
nulls or not. This is like generics '? extends String' and '? super
String'. You can read Strings from the first (but not write them), and
you can write strings (but not read them) into the second. Co/
contravariance, in other words. And so, non-nullness has the same
complexity as generics. We need a way to say that we don't know the
forced non-nullity status of a generics bound. Let's make up a syntax
for this:

List<String?> list = new ArrayList<String!>();

The ? doesn't indicate 'might be null', but instead indicates 'might
or might not be forced non-null'. As a result, when adding to this
list, you must pass in String!, and when reading, you get String (so,
adding stuff must not be null, but when reading you must be prepared
for nulls). Just like with ? extends String, when reading you get a
String, but you can't write to it. Imagine we used non-null as a
default and '?' as a marker that it might be null, what symbol do you
use to indicate this now? Interrobang? Something like List<?!> list?
We'll get back to that; lets continue with 'null is default, !
indicates non-null, ? indicates unknown state' for a moment.
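
Before going on, here is the plain-java generics behaviour this whole
section leans on, as legal code you can compile today (no non-null
markers involved, names are just for illustration):

import java.util.ArrayList;
import java.util.List;

public class VarianceDemo {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<Integer>();
        ints.add(1);

        // List<Number> nums = ints;            // rejected: generics are invariant
        List<? extends Number> readable = ints; // fine, but the list becomes read-only
        Number first = readable.get(0);         // reading is allowed
        // readable.add(2);                     // rejected: can't write through '? extends'

        List<? super Integer> writable = new ArrayList<Number>();
        writable.add(2);                        // writing Integers is allowed
        Object raw = writable.get(0);           // reads only come back as Object

        System.out.println(first + " and " + raw);
    }
}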

We need the ? for return types as well. ArrayList can of course hold
null if it wants to, but its get() method returns null only if its
generics parameter is a nullable type. However, for the V part of a
Map, this isn't true; its get method can always return null. For
another class, let's say a hypothetical NullMarkedLinkedList, you MUST
always pass in non-null types, so it would make sense to throw a
compiler error right at the construction phase and disallow: new
NullMarkedLinkedList<String>(); - only allow passing <String!>. We
need ways to mark this. Let's go with:

public class NullMarkedLinkedList<T!> implements List<T> {}

The T! there indicates that the generics bound MUST be non-null.

and for the map class which needs to explain that its get method can
always return null:

public V? get(K key) {}

This means: Regardless of the allows-null status of 'V', this method
can always return null.

Okay, so far so good, sort of. It's already miles more complicated
than any of you originally thought. I was certainly bummed when I got
this far. But we're by no means done, unfortunately.

Lets now add full generics in the mix.

List<? extends Number> list = new ArrayList<Double!>();

That seems illegal, for the same reason List<String> = new
ArrayList<String!>() is not legal. Except that this one IS legal,
because, due to the 'extends', we've already limited ourselves to only
reading from 'list' (try it; you can't add things to that list unless
you do an unsafe generics cast which produces a generics warning).
However, this is even more complicated than generics: List<T> =
someList<T!> is illegal, but List<? extends T> = someList<T!> isn't.
Whoa.

But we need a way to indicate that 'list' is still definitely non-
null, so that when we read from it, we get 'Number!' out, instead of
'Number'. Let's use the interrobang for this one:

List<?! extends Number> list = new ArrayList<Double!>();

Unfortunately the ?! syntax looks a bit like cartoon swearing but it
seems like the most sensible thing. Putting the ! on Number seems
nicer, but that doesn't really make any sense. The Number is just a
bound, it's not a complete type. The actual thing which isn't going to
be null is the type itself, not the bound, which in this case is
represented by the ?.

Ugh. We now have:

T - no non-null bound
T! - definitely not null
T? - may or may not be forced non-null
?! - nameless generics variable with forced non-null

and we're still not done - there's 'super' as well:

List<? super Integer> list = new ArrayList<Number!>(); //not okay
List<?! super Integer> list = new ArrayList<Number!>(); //okay
List<?! super Integer> list = new ArrayList<Number>(); //NOT OKAY!

Now the reverse happens; the ?! notation will accept either forced non-
null or not-forced-non-null because due to the 'super' we've already
limited ourselves to just writing things into the list. ?! forces us
to write non-null Integers, which is obviously also okay to do to a
list that contains nullable Numbers (a non-null Integer is obviously a
Number which may or may not be null). Unfortunately, unlike the '?
extends Number' case, where it's impossible to write to the list, you
*CAN* still read from this list. You'll just get Object back, which is
annoying, because Object also has nullable/non-nullable status. Either
the List<?! super Integer> returns "Object" for its get method instead
of "Object!" - which seems completely screwed up, and makes it
impossible to pass non-nullity status to stuff with generics 'super'
bounds (as even the ! wouldn't actually mean: never null) - or we need
yet another syntax, the double question mark:

List<?? super Integer> list = someList();
Object x = list.get(0); //okay
list.add(null); //not okay
list.add(10); //okay

I think we're done, but at this point I'm cleaning my exploded brains
off the walls, so who knows if this is where the insanity ends. (I
checked what IDEA does, but IDEA just doesn't allow the annotations
they use inside generics bounds, which seems like making the entire
null checking stuff pointless. I'd ask IDEA people how useful the very
limited nullability checking is, but I'm afraid of fanboyism, so,
please be elaborate and use examples)


Even if you compiled straight to class files a la scala, you can't fix
this one without making life very very difficult for yourself.



By the way, Viktor, you're still not getting it.

For example, you can't add tailcalls to the JVM without *CHANGING THE
JVM*. Not even scala goes this far!

Here's your scoreboard (source-source is source rewriting, source-
class is compiling directly to class files, which wasn't the idea, and
JVM-change means changing the JVM, which is completely off the board).
duplicity test = because of the translation back and forth, only 1
version is allowed. So with type aliases, for example, if you aliased
'StringList' to 'List<String>', then anytime you typed 'List<String>',
the source would forcibly be changed. This would get annoying, I bet.

1) Remove checked exceptions: FAIL: source-class
2) Add type aliases: PASS: source-source, but duplicity concern.
3) add value classes: PASS: source-source
4) add non-null: FAIL: complexity (see above)
5) add methods to interfaces: If you meant static methods, PASS:
source-source (stuff methods into a $Methods inner class). If you
meant traits, FAIL: extensive source-class also of code that
implements it. (note how you can't implement scala traits in java
code, for example)
6) add CICE: PASS. source-source.
7) tailcalls: FAIL: JVM rewrite
8) permgenspace goes away: FAIL: JVM rewrite
9) message-based concurrency: N/A: That's library stuff; the only
thing you can do is add closures and a few other primitives so you can
write for it in a sane way, which 'add CICE' mostly covers.

Jess Holle

Oct 13, 2008, 1:07:33 PM
to java...@googlegroups.com
Scala has lots of cool feature bullets, but it seems to have gone out of its way to prevent mainstream acceptance by choosing a syntax far removed from any statically typed language with mainstream acceptance.  I'm not talking about Java Joes either here -- but rather anyone not so absolutely desperate for the features in question that they don't mind reading something that has about as much in common with other statically typed mainstream languages as Korean has with English.  Okay, Scala has a bit more in common -- it uses the same alphabet.

Scala seems like a nice academic and/or niche experiment, but it seems designed not to be confused with anything mainstream.

Clearly as another JVM-based language, Scala is (1) a compiler that produces byte code and (2) a set of supporting runtime libraries (also in byte code).  What Reinier is talking about is focusing on (1) with a goal to convert to/from standard Java sources automatically.  One can clearly take much larger source strides if one throws out this goal, e.g. simply by extending javac and possibly the NetBeans parser (which is javac-based, right?) to cover any additional features one wants.  You can only go so far without adding (2) as well, for some cases, though.

All this omits (3) changing the JVM itself, which is much harder than (1) or (2) for most folk and also severely limits acceptance due to the requirement of a special JVM binary.  Gafter did this for BGGA, but you'll notice Scala, Ruby, Jython, Fortress, etc, all avoid this dimension of change -- they suggest changes they'd like but work around the lack thereof so as to be more accepted tools for now.

--
Jess Holle

Viktor Klang

Oct 14, 2008, 3:52:36 AM
to java...@googlegroups.com
Actually, I think non-nullness could be simplified if you change the way you view it.

if ! indicates the type <a string value>
and ? indicates the type <Either<String!,null>>

Clearly,

1) casting a ! to a ? is not possible, a type conversion needs to take place
2) casting a ? to a ! is not possible, a checked unboxing needs to take place (assuring that only the left path of the either is taken for the assignment)

Can this simplify the non-nullness problem?
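
If it helps to see that view in today's java: below is a rough sketch of
the kind of explicit container being described, where getting the value
out is a checked "unboxing" step. The Maybe name and its API are made up
purely for illustration:

public abstract class Maybe<T> {

    public abstract boolean isDefined();

    // The checked unboxing: only call this after isDefined() returned true.
    public abstract T get();

    public static <T> Maybe<T> just(final T value) {
        if (value == null) throw new IllegalArgumentException("value must not be null");
        return new Maybe<T>() {
            public boolean isDefined() { return true; }
            public T get() { return value; }
        };
    }

    public static <T> Maybe<T> nothing() {
        return new Maybe<T>() {
            public boolean isDefined() { return false; }
            public T get() { throw new IllegalStateException("no value present"); }
        };
    }
}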


Translating all reference == to .equals is a good idea. (reference equality is horrid)

I agree that JVM changes are freaking people out, and Sun is being childish in their efforts to slow JVM development down to a grinding halt.

Reinier Zwitserloot

Oct 14, 2008, 8:54:38 AM
to The Java Posse
Introducing ! as definitely never null, and ? as might be null, in an
Either/Maybe kind of construct, doesn't really help, unless you are
willing to throw out the ability to make a list/map/anything else with
generics in it which will take arbitrary type input; it doesn't matter
if the type you're passing in is of the 'never null' or of the 'maybe
null' or of the 'I don't know' variety. Throwing that out seems like a
death sentence for the entire feature; the point is ease of use, and
getting rid of pointless null checks, not complicating the issue by
adding a bunch of null checks that are only necessary to satisfy the compiler.

In other words, you need to be able to say:

List< AAAAA > list;

where AAAAA is something that says: "Strings, but the nullable or non-
nullable nature of them is irrelevant; I'll only be reading from them,
and I'll do explicit null checks when I do, so I just don't care, and/
or when I write to it, I'll definitely write non-null."

You also need: I do care; only one particular variant of null-allowed
is okay here.

This is actually one less scenario than generics (which splits up the
first scenario into: I'll only be writing into it, so I'll give an
type range going from Object to the thing I'll be putting in, say,
Integer, and you can then feed it a List of Object, Number, or
Integer, it'll all be good, and into a second scenario: I'll only be
reading from it, so I'll give a type range going from, say, Number, up
to whatever, and I'll only be reading Numbers out, so if you give me a
Number, or Integer, or Double, it'll be all good. In our case we can
combine these two situations into stating that: When reading, you get
possibly-null out, and when writing, you must never write nulls in.


I think your suggestion to use ? to indicate 'definitely allowed to be
null' and use the lack of a marker as 'we don't know if null is
allowed here or not' might make the concept a lot simpler to grok
(though it would force you to add a marker to every type you state in
your entire codebase to do it 'right', which I envision would get
annoying fast). You'd then have:

! = Definitely not null.
? = Definitely allowed to be null.
(nothing) = We don't know, so to be safe, when reading, you get ?, but
when writing, you must write in ! - that can't go wrong.

So, you'd get:

List!<String> listA;   // must only add non-nulls, but when reading elements out, must check for null.
List!<String?> listB;  // when reading, you must still check for null, but you're allowed to write null into it as well.
List!<String!> listC;  // must only add non-nulls, and when reading, you get String!s out, so no need for null checks.

listA = listB; //okay
listA = listC; //also okay. Hey! That's nice!
listB = listA; //not okay
listC = listA; //also not okay
listB = listC; //duh - not okay

Of course, if your generics type has an unnamed type with a bound, you
still get the funky ?! and ?? syntax, and you also still get the extra
leniency:

List!<? extends Number> listA;   // can't add, but when reading, must check for null.
List!<?? extends Number> listB;  // can't add, but when reading, must check for null. Same as listA.
List!<?! extends Number> listC;  // can't add, but when reading, no need for null-check.

listA = listB; //okay
listB = listA; //okay - contrast to above, where it wasn't.
listA = listC; //okay
listB = listC; //okay - contrast to above, where it wasn't.
listC = either; //not okay.

And the most complicated case, super:

List!<? super Integer> listA;   // write Integer!, read Object?
List!<?? super Integer> listB;  // write Integer!/Integer/Integer?, read Object?
List!<?! super Integer> listC;  // write Integer!, read Object!

note that listA and listC aren't quite the same, due to the fact that
you can read from ? super X bounds, whereas you can't write to ?
extends Y bounds.

assignment compatibility: same as in the String! vs. String? vs.
String case.


Grokking this might be easier, and being compatible with existing java
code is way easier (no marker = dunno), but there is the issue of !
and ? showing up *EVERYWHERE*:

public Integer? combineHashes(List?<?!> list) {
    if ( list == null ) return null;
    Integer! hash = 0;
    for ( Object! x : list ) hash ^= x.hashCode();
    return hash;
}

Does that look okay to you? If it does, this might be worth the
trouble.


On Oct 14, 9:52 am, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
> Actually, I think non-nullness could be simplified if you change the way you
> view it.
>
> if ! indicates the type <a string value>
> and ? indicates the type <Either<String!,null>>
>
> Clearly,
>
> 1) casting a ! to a ? is not possible, a type conversion needs to take place
> 2) casting a ? to a ! is not possible, a checked unboxing needs to take
> place (assuring that only the left path of the either is taken for the
> assignment)
>
> Can this simplify the non-nullness problem?
>
> Translating all reference == to .equals is a good idea. (reference equality
> is horrid)
>
> I agree that JVM changes are freaking people out, and Sun is being childish
> in their efforts to slow JVM development down to a grinding halt.
>
> ...
>
> read more »

Viktor Klang

Oct 14, 2008, 10:49:49 AM
to java...@googlegroups.com
I don't like the possibility to not specify nullness.

Let's say that ? is implicit

List<String> = List?<String?> = (a list with Strings or null) or null

List!<String!> = a list with strings

List<String!> = (a list with Strings) or null

List!<? extends String> = a list with (subtypes of string or null)
List!<?! extends String> = a list with subtypes of string
List!<? super String> = a list with (supertypes of String or null)
List!<?! super String> = a list with supertypes of String

List?<? extends String> a = ...
List!<? extends String> b = ...

a = b is legal (since they have the same generics signature)
b = a is not legal since a can be null

I haven't really thought this through, I'm just exploring the possibilities.

/Viktor

Reinier Zwitserloot

Oct 15, 2008, 8:11:57 AM
to The Java Posse
If you can't specify "I don't care about nullness", it would be
-impossible- to write the following code:

List<X> list = someMethodThatReturnsAStringList();
list.add("foo");

in such a way that the method can return either a list of strings-or-
nulls, or a list of definitely-not-null-strings.

Even though it shouldn't matter, given that the only thing the code
does is add a non-null string.

I'm pretty much positive not being able to do that makes this feature
very stupid.


On Oct 14, 4:49 pm, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
> I don't like the possibility to not specify nullness.
>
> Let's say that ? is implicit
>
> List<String> = List?<String?> = (a list with Strings or null) or null
>
> List!<String!> = a list with strings
>
> List<String!> = (a list with Strings) or null
>
> List!<? extends String> = a list with (subtypes of string or null)
> List!<?! extends String> = a list with subtypes of string
> List!<? super String> = a list with (supertypes of String or null)
> List!<?! super String> = a list with supertypes of String
>
> List?<? extends String> a = ...
> List!<? extends String> b = ...
>
> a = b is legal (since they have the same generics signature)
> b = a is not legal since a can be null
>
> I haven't really thought this through, I'm just exploring the possibilities.
>
> /Viktor
>
> ...
>
> read more »

Viktor Klang

Oct 15, 2008, 9:04:44 AM
to java...@googlegroups.com
On Wed, Oct 15, 2008 at 2:11 PM, Reinier Zwitserloot <rein...@gmail.com> wrote:

If you can't specify "I don't care about nullness", it would be -
impossible-, given the following code:

List<X> list = someMethodThatReturnsAStringList();
list.add("foo");

This expands to:

List?<X?> list = someMethodThatReturnsAStringList();
list.add("foo");

Which is exactly (I don't care whether list is null or not, and whether it contains Xs or nulls) what you said.
 


in such a way that the method can return either a list of strings-or-
nulls, or a list of definitely-not-null-strings.

Eventhough, given the only thing it does is add a non-null string, it
doesn't matter.

I'm pretty much positive not being able to do that makes this feature
very stupid.

Reinier, that's an opinion, and not an argument.

I'd like to continue this discussion, so it'd be nice if more people join in with ideas, questions, etc.
 

Reinier Zwitserloot

Oct 16, 2008, 3:21:14 PM
to The Java Posse
On Oct 15, 3:04 pm, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
> This expands to:
>
> List?<X?> list = someMethodThatReturnsAStringList();
> list.add("foo");

No; it doesn't. The above means that 'list.add(null);' should be
legal. This is a meaningful property of a type, but it does come with
caveats. Specifically, if the 'someMethodThatReturnsAStringList'
method returns a list that does not allow nulls, this should be a
compile-time error. There's really no way that I can see to get around
the notion that you need 3 different properties in generics
parameters:

A. Definitely allows null, (benefit: Can write nulls in)
B. Definitely does not allow null (benefit: No need to null-check when
reading)
C. Don't know / Don't care if its allowed (benefit: Accept anything)

Because the whole point, in a sense, is to eliminate null checks as
much as possible, it seems fundamentally bad to ADD situations where
you need to perform null checks (which is what you'd do if you tried
to combine B and C, for example). That's what I meant.

Now, three types might be acceptable, but syntax is definitely an issue,
ESPECIALLY if non-null becomes the default; there are two different
more-nully options, so you need 'nully' and 'even more nully'. Using
'?' and '??' seems kinda wrong, and the ? is used a lot more in java
than the !, which makes it hard to use ? or even ??. I'd say it's
pretty bad to let parser architecture dictate syntax, but it's
undeniable that suffix-! for non-null, suffix-? for 'most definitely
allows null', and -nothing- to indicate 'don't know/care' is easiest on
the parser, and thus easiest for tools out in the wild to adopt this
change, and easiest for the tutorials. It's not the easiest on the eyes,
even once everyone's used to it, which is a pretty big problem, but for now
it's the front-runner for me.
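
For comparison, today's generics already has exactly this kind of 'don't
care' escape hatch in the wildcard; a rough sketch in plain java (the
printAll helper is made up purely for illustration):

import java.util.Arrays;
import java.util.List;

class WildcardDemo {
    // The element type is 'don't know / don't care': we only read, so any List fits.
    static void printAll(List<?> list) {
        for (Object o : list) {
            System.out.println(o);
        }
    }

    public static void main(String[] args) {
        printAll(Arrays.asList("a", "b")); // a List<String> is fine
        printAll(Arrays.asList(1, 2, 3));  // so is a List<Integer>
    }
}

The third nullity flavour would play the same role for nullness that the
wildcard plays for the type argument itself.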

Viktor Klang

Oct 17, 2008, 3:09:57 AM
to java...@googlegroups.com
On Thu, Oct 16, 2008 at 9:21 PM, Reinier Zwitserloot <rein...@gmail.com> wrote:

On Oct 15, 3:04 pm, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
> This expands to:
>
> List?<X?> list = someMethodThatReturnsAStringList();
> list.add("foo");

No; it doesn't. The above means that 'list.add(null);' should be
legal. This is a meaningful property of a type, but it does come with
caveats. Specifically, if the 'someMethodThatReturnsAStringList'
method returns a list that does not allow nulls, this should be a
compile-time error.

Of course it should.
 
There's really no way that I can see to get around
the notion that you need 3 different properties in generics
parameters:

A. Definitely allows null, (benefit: Can write nulls in)
B. Definitely does not allow null (benefit: No need to null-check when
reading)
 

C. Don't know / Don't care if it's allowed (benefit: Accept anything)

I don't think that's an option; that kind of thinking is what got us into this mess in the first place.
A type is a type is a type.

String? is another type than String!

Not caring about types should place you in the dynamic languages section.
Either you design the code to handle nulls, or you don't. It's binary.
 

Reinier Zwitserloot

Oct 18, 2008, 3:50:42 PM
to The Java Posse
You can't enforce one of the two with no way to say 'either is fine'. That's
just not feasible. Think about it; it would be impossible to write
even the simplest utility methods.

Just like in generics, where you can specify 'don't care what it is'
('T extends Object'), it should be possible to say: don't care which
kind of nullable we have.

Unlike generics, where the amount of things you can do with a random
'T' is fairly limited, for nullity there's loads you can do. The
number of methods that never write null but do null checks when
reading (or don't actually read at all) is virtually limitless. And yet all of
those would have to pick a side and allow null or not, even though IT
DOESN'T MATTER to them. When the type system gets in the way of you not
repeating yourself, the type system has completely failed.
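
To make that concrete, here is a sketch of such a method in vanilla java
(countNonNull is a made-up name): it null-checks every read and never
writes, so forcing it to commit to 'allows null' or 'never null' would be
pure repetition:

import java.util.List;

class NullAgnostic {
    // Never writes into the list and null-checks every read, so it works
    // identically for a list that may contain nulls and one that never does.
    static int countNonNull(List<String> list) {
        int count = 0;
        for (String s : list) {
            if (s != null) count++;
        }
        return count;
    }
}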


On Oct 17, 9:09 am, "Viktor Klang" <viktor.kl...@gmail.com> wrote:

Viktor Klang

Oct 19, 2008, 6:12:12 AM
to java...@googlegroups.com
But think about it Reinier,

type! can be assigned to a type?

i.e.

Example no. 1:

   String! ns = "";
   String? s = somecondition ? ns : null;

We also for generics have:

?? = something or null
?!  = something

Since we cannot allow casting

List!<String!> to List!<String?> (for obvious reasons),

what we can do, though, is convert non-genericised types (as in example no. 1)
like this:

List!<?! extends String> foo = ...
List?<?! extends String> bar = ...

bar = foo //this is ok

But!

Since we know that bar.get(0) returns non-null instances of subtypes of String, we can do this:

String? baz = bar.get(0) //returns non-null instance of subtype of String, and we can assign that value to a String?

So you can write methods like this:

public <T! extends String> T doSomethingGeneric(final List!<T> thelist)
{
    return thelist.get(0); //Completely null-safe
}

Yes, since you will not be able to cast a parameterized type to another parameterized type with changed nullness constraints, you would need some additions to the standard libraries: wrapper types that implement view semantics (they can be lazy) and provide read-only access.
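
(For reference, the today's-generics shape of that method, minus the nullity
markers, would be roughly the following sketch; the bound is widened to
CharSequence here only because String is final:)

import java.util.List;

class FirstElement {
    // Same shape as doSomethingGeneric above, without the ! and ? markers;
    // in current java the result may of course still be null.
    static <T extends CharSequence> T first(final List<T> theList) {
        return theList.get(0);
    }
}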

Reinier Zwitserloot

Oct 20, 2008, 6:17:04 AM
to The Java Posse
Now try this for generics 'super' and you run into your first problem.
Then try without any bounds (just a flat type, not a T extends/super
Type).

Viktor Klang

Oct 20, 2008, 7:16:34 AM
to java...@googlegroups.com
Reinier,

Ok, <?? super String> == Something that is a (supertype of String) or null
or <?! super String> == something that is a supertype of String

Is this what you meant?

Reinier Zwitserloot

Oct 20, 2008, 10:23:23 PM
to The Java Posse
Yes, but, now try to work with these. As you said before, '?? extends
Foo' is general purpose: it's compatible with ?! extends Foo.

However, neither ?! super Foo nor ?? super Foo is general
purpose. ?! cannot be assigned to ?? because, obviously, ?? would
allow you to add nulls to this list.

However, ?? cannot be assigned to ?! either. Okay, sure, when adding
things this works out fine (forced to add non-null, so that's great),
but, you can still read from these things; you just get Object back.
However, in the case of ?! super Foo, it would be really -really-
strange if "Object?" instead of "Object!" rolled out.

Now try this with no bounds, just a type, e.g "List<String>". Without
all three modifiers (non-null, definitely null, unknown), you just
can't cover all reasonable use cases.
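
The 'super' half of the problem is already visible in today's generics, by
analogy (a sketch, not the proposed nullity syntax):

import java.util.ArrayList;
import java.util.List;

class SuperWildcardDemo {
    public static void main(String[] args) {
        List<? super Integer> sink = new ArrayList<Number>();
        sink.add(42);               // writing an Integer is fine
        Object o = sink.get(0);     // but reading only ever gives Object back
        // Integer i = sink.get(0); // would not compile
        System.out.println(o);
    }
}

The nullity question has the same shape: what should reads from a
<?! super Foo> hand back?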

On Oct 20, 1:16 pm, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
> Reinier,
>
> Ok, <?? super String> == Something that is a (supertype of String) or null
> or <?! super String> == something that is a supertype of String
>
> Is this what you meant?

Viktor Klang

Oct 21, 2008, 7:55:43 AM
to java...@googlegroups.com
On Tue, Oct 21, 2008 at 4:23 AM, Reinier Zwitserloot <rein...@gmail.com> wrote:

Yes, but, now try to work with these. As you said before, '?? extends
Foo' is general purpose: It's compatible with ?! extends Foo..

However, neither ?! super Foo, nor ?? super Foo, are general
purpose. ?! cannot be assigned to ?? because, obviously, ?? would
allow you to add nulls to this list.

Are you talking about List?<?! super Foo> vs. List?<?? super Foo>?
If so, no, you cannot promote List?<?? super Foo> to List?<?! super Foo>, but really, I think it needs to be evaluated in context.
If many of the ugly if-checks are eliminated, and many NPEs are avoided, I would be rather surprised if LoC wouldn't drop drastically (even though the "problem" mentioned above)
 


However, ?? cannot be assigned to ?! either. Okay, sure, when adding
things this works out fine (forced to add non-null, so that's great),
but, you can still read from these things; you just get Object back.
However, in the case of ?! super Foo, it would be really -really-
strange if "Object?" instead of "Object!" rolled out.

List!<?! super Foo>.get(0) would yield either an OOBE or Foo! (and Foo! can be autocasted to Foo?)
 


Now try this with no bounds, just a type, e.g "List<String>". Without
all three modifiers (non-null, definitely null, unknown), you just
can't cover all reasonable use cases.

List<String> is syntactic sugar for List?<String?> :)

So, if we boil it down, what are the benefits and drawbacks?

 

Reinier Zwitserloot

Oct 22, 2008, 11:15:24 PM
to The Java Posse
Inline...

On Oct 21, 1:55 pm, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
>
> If many of the ugly if-checks are eliminated, and many NPEs are avoided, I
> would be rather surprised if LoC wouldn't drop drastically (even though the
> "problem" mentioned above)

That's the goal, yes, but, look at the backlash against generics. It
has to be pretty perfect.

>
> List!<?! super Foo>.get(0) would yield either an OOBE or Foo! (and Foo! can
> be autocasted to Foo?)

No, that makes no sense. <? extends Foo> would yield Foo or Foo! ; ?
super Foo 'yields' Object.
You can *ADD* "Foo" to it. Assuming you meant that List<?! super
Foo>.get(0) yields an OOBE or Object!, then...

we have a problem. If we do that, then <?! super Foo> is not capable
of holding a <?? super Foo>. Contrast this to extends, where we've
proven that:

<?? extends Foo> is capable of holding a <?! extends Foo> without any
issues.

We can solve the problem by letting the rarely used get()-style
operation (return type of T) in combination with <?! super Foo> return
T? and not T!, but that makes very little sense; why would ?! return
non-! - that's kind of weird.

>
>
>
> > Now try this with no bounds, just a type, e.g "List<String>". Without
> > all three modifiers (non-null, definitely null, unknown), you just
> > can't cover all reasonable use cases.
>
> List<String> is syntactic sugar for List?<String?> :)

No - try writing a non-particular generics bound (non-particular in
the sense of nullness). You've got 3 options here, and they are all
vastly different and incompatible:

String! - can not be null (advantage: you can read out without null-
checks).
String? - can be null (advantage: you can write nulls into this thing)
String- - can be either, you don't know (advantage: You can accept
either String! or String?).

Again, not having that third option sounds to me like a recipe for
utter disaster. It's not only generally annoying, but in light of the
fact that legacy code will lead to lots of situations where types have
the wrong type of nullity, it seems extremely important to be able to
write flexible methods that accept either.

>
> So, if we boil it down, what are the benefits and drawbacks?
>

It's good - no doubt about it. Just like generics was good - no doubt
about it. The drawback is, that it's just as complicated as generics,
with just as much funky syntax. The one advantage, due to limited
types (specifically, only three types: non-null, allows-null, unknown-
null), is that I foresee far fewer 'puzzlers'.

Viktor Klang

Oct 23, 2008, 3:12:54 AM
to java...@googlegroups.com
On Thu, Oct 23, 2008 at 5:15 AM, Reinier Zwitserloot <rein...@gmail.com> wrote:

Inline...

On Oct 21, 1:55 pm, "Viktor Klang" <viktor.kl...@gmail.com> wrote:
>
> If many of the ugly if-checks are eliminated, and many NPEs are avoided, I
> would be rather surprised if LoC wouldn't drop drastically (even though the
> "problem" mentioned above)

That's the goal, yes, but, look at the backlash against generics. It
has to be pretty perfect.

>
> List!<?! super Foo>.get(0) would yield either an OOBE or Foo! (and Foo! can
> be autocasted to Foo?)

No, that makes no sense. <? extends Foo> would yield Foo or Foo! ; ?
super Foo 'yields' Object.

Sorry, I wasn't thinking straight when I wrote that.

<?! super Foo> means something that is a supertype of Foo and never null.
 

You can *ADD* "Foo" to it. Assuming you meant that List<?! super
Foo>.get(0) yields an OOBE or Object!, then...

we have a problem. If we do that, then <?! super Foo> is not capable
of holding a <?? super Foo>. Contrast this to extends, where we've
proven that:

<?? extends Foo> is capable of holding a <?! extends Foo> without any
issues.

We can solve the problem by letting the rarely used get()-style
operation (return type of T) in combination with <?! super Foo> return
T? and not T!, but that makes very little sense; why would ?! return
non-! - that's kind of weird.

This makes no sense. Why should it return T? (T? = T or null)
 


>
>
>
> > Now try this with no bounds, just a type, e.g "List<String>". Without
> > all three modifiers (non-null, definitely null, unknown), you just
> > can't cover all reasonable use cases.
>
> List<String> is syntactic sugar for List?<String?> :)

No - try writing a non-particular generics bound (non-particular in
the sense of nullness). You've got 3 options here, and they are all
vastly different and incompatible:

String! - can not be null (advantage: you can read out without null-
checks).
String? - can be null (advantage: you can write nulls into this thing)
String- - can be either, you don't know (advantage: You can accept
either String! or String?).

But that is what I told you: if you do not care about nullness, you use <type>?, since <type>! can be promoted to <type>?
 


Again, not having that third option sounds to me like a recipe for
utter disaster. It's not only generally annoying, but in light of the
fact that legacy code will lead to lots of situations where types have
the wrong type of nullity, it seems extremely important to be able to
write flexible methods that accept either.

>
> So, if we boil it down, what are the benefits and drawbacks?
>

It's good - no doubt about it. Just like generics was good - no doubt
about it. The drawback is, that it's just as complicated as generics,
with just as much funky syntax. The one advantage, due to limited
types (specifically, only three types: non-null, allows-null, unknown-
null), is that I foresee far fewer 'puzzlers'.

For me, there are still only 2 different types ;)
 



Reinier Zwitserloot

Oct 23, 2008, 10:48:29 AM
to The Java Posse
Okay, here's a simple example that should clear this all up:

Scenario: I have a method that accepts a list of Strings, using
each string in the list to fill a table row. Our model actually
supports 'sparse' rows, so nulls in the input list indicate that that
particular column doesn't contain anything. Most users don't use the
sparse row facility, but a few do. This is our method in vanilla java
land:

public void setRowValues(List<String> values) {
    //do stuff
}

So far, so good. Now we want to add nullity to this. In our case, we
clearly allow nulls, so we write:

public void setRowValues(List!<String?> values) {
//to clarify: A non-null list which contains only Strings or 'null'.
}

Great, so, now, I'm the caller, and I have a list of strings ready to
pass to this method. For whatever reason, my list of strings is of the
List<String!> variety; let's say I'm not using the sparse row facility
and my code isn't capable of handling sparse rows, so it's actually great
that I can use List!<String!> to get compile-time checking that I don't
accidentally go there.

so I write:

List!<String!> myValues = Arrays.asList("foo", "bar", "baz"); //for
argument's sake, let's say Arrays.asList returns List!<String!> here.
row.setRowValues(myValues);

... and I get a compiler error when I compile it: List<String!> is not
assignment-compatible with a List<String?>. (If it were, then what
happens when our setRowValues adds a value to the list? It could
clearly do: "values.add(null);".)

This bewilders me to no end. I give up, and curse at the author of
this library.


That's clearly not an acceptable scenario. So far you haven't shown me
anything about avoiding that scenario.
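
For the record, plain java generics already springs exactly this trap, which
is what the nullity variant would be recreating; a sketch (the setRowValues
here is a hypothetical stand-in that takes List<Object>):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class InvarianceDemo {
    // Declared to accept 'anything', but as an invariant List<Object>.
    static void setRowValues(List<Object> values) {
        values.add(null); // perfectly legal for a List<Object>
    }

    public static void main(String[] args) {
        List<String> myValues = new ArrayList<String>(Arrays.asList("foo", "bar", "baz"));
        // setRowValues(myValues); // does not compile: a List<String> is not a List<Object>
    }
}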



NB: You can sneakily fix it here by using <? extends String>, even
though that looks like a stupid bound, and will trigger IDE warnings,
as 'String' is final. However, imagine that we actually do add things
to this list, but we don't add 'null', regardless of input. Then this
silly trick no longer works.

On Oct 23, 9:12 am, "Viktor Klang" <viktor.kl...@gmail.com> wrote:

Viktor Klang

Oct 24, 2008, 5:39:13 AM
to java...@googlegroups.com

I see two viable solutions:

1) You don't accept a List, but you accept an Iterable!<String?>.
    Then we add the following, either as an extension method or as a utility:
   public final static <T> Iterable<T?> view(final Iterable<T!> src)
   {
       return new Iterable<T?>() {

           @Override
           public Iterator<T?> iterator()
           {
               final Iterator<T!> it = src.iterator();
               return new Iterator<T?>() {

                   @Override
                   public boolean hasNext()
                   {
                       return it.hasNext();
                   }

                   @Override
                   public T? next()
                   {
                       return it.next(); //Conversion from T! to T? is legal
                   }

                   @Override
                   public void remove()
                   {
                       throw new UnsupportedOperationException();
                   }
               };
           }
       };
   }

The same solution can be applied to Lists etc.
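
(For lists, the closest existing analogue is an unmodifiable view; a sketch of
the same read-only idea without the nullity syntax:)

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

class ReadOnlyViewDemo {
    public static void main(String[] args) {
        List<String> source = new ArrayList<String>(Arrays.asList("foo", "bar"));
        List<String> view = Collections.unmodifiableList(source);
        System.out.println(view.get(0)); // reading is fine
        // view.add("baz");              // compiles, but throws UnsupportedOperationException at runtime
    }
}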



Peter Becker

Oct 24, 2008, 7:46:46 AM
to java...@googlegroups.com
Aren't you making the point that we need immutable collection interfaces? ;-)

The problem you point out doesn't seem specific to the String!/String?
relationship. You get the same problem with any subtyping relationship
already.

I must admit I didn't study your previous discussion about the ??/?!
in all detail (due to time constraints -- sorry), but why wouldn't a
(relatively) simple List<? extends String?> as formal parameter type
do the trick? Since String? allows all values String! allows, String!
should classify as subtype.

Of course that doesn't guarantee the client the safety of not having
nulls, but it seems the author of the method doesn't want to give that
guarantee, which is unfortunate for the client but the author's right
in the type system we discuss.

Admittedly explaining all the combinations of <(\?[\?|!]?
[super|extends] )?X[\?|!]?> (untested RegExp, but hopefully good
enough to give the idea) would keep Angelika Langer busy for a while
adding another few dozen pages to her FAQ. And some combinations are
nonsensical, such as requesting a nullable subtype of a non-nullable
type and its dual (both would be the empty set of types, even
excluding the named type). But an approach like the proposed one seems
doable to me.

Peter

--
What happened to Schroedinger's cat? My invisible saddled white dragon ate it.

Reinier Zwitserloot

Oct 24, 2008, 9:23:29 AM
to The Java Posse
Viktor, I mean no offense, but that 'solution' is going to send the
torches and pitchforks to Sun. That's not a solution at all. You want
to create a view method for every class that has a generics bound,
everywhere? What about generics in generics? e.g. List<Set<Foo>>?

On Oct 24, 11:39 am, "Viktor Klang" <viktor.kl...@gmail.com> wrote:

Reinier Zwitserloot

Oct 24, 2008, 9:29:09 AM
to The Java Posse
Peter, you may want to read my condensed explanation here:

http://www.zwitserloot.com/2008/10/23/non-null-in-static-languages/

Here's a teaser:

String? is not a supertype of String! when used in generics:

List<String!> foo = new ArrayList<String!>();
List<String?> bar = foo;
bar.add(null); //legal
String! s = foo.get(0); //legal...
//but now a 'String!' contains null, so we failed.

A method should be able to guarantee that it won't screw up your non-
nullity, but that it is perfectly fine with accepting a list with
nulls as well. That's what this is all about: How do you do that? With
just two options (yes, nulls allowed, and, no, nulls not okay),
there's no way to say: I don't care. And yet, that particular use case
is a very common one, if you have a look at the standard libraries.
Many methods don't particularly care about nullity and don't do
dangerous things (specifically: adding nulls, -or-, not checking for
null).
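
Java's covariant arrays are the existing cautionary tale here: they allow
exactly the kind of assignment the snippet above forbids, and pay for it with
a runtime check instead of a compile-time one (a sketch):

class ArrayCovarianceDemo {
    public static void main(String[] args) {
        String[] strings = new String[1];
        Object[] objects = strings;      // allowed: arrays are covariant
        objects[0] = Integer.valueOf(1); // compiles, but throws ArrayStoreException at runtime
    }
}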

Viktor Klang

Oct 24, 2008, 9:34:51 AM
to java...@googlegroups.com
On Fri, Oct 24, 2008 at 3:23 PM, Reinier Zwitserloot <rein...@gmail.com> wrote:

Viktor, I mean no offense, but that 'solution' is going to send the
torches and pitchforks to sun. That's not a solution at all. You want
to create a view method for every class that has a generics bound,
everywhere? What about generics in generics? e.g. List<Set<Foo>>?

Yes, I agree that mutable collections are bad, and also that null is bad.
Basically, an Iterator is a view. And if you're only an observer of the content, you shouldn't get the ability to alter the container.
 

Peter Becker

Oct 24, 2008, 11:41:38 PM
to java...@googlegroups.com
Reinier,

I hadn't noticed the blog post before my last post to the list due to
inbox ordering, but I've read it now.

If you have a List<? extends String?> it allows both a List<String?>
and a List<String!> (and the List<String>, too). You couldn't add
nulls to a List<? extends String?>, but you would have to expect them.
It's really the same as with any List<? extends X> once you start
thinking of String! as a subtype of String?, which seems perfectly
valid to me.

More interesting is what the relation between String and String? is:
extensionally they are both the same type, and in some ways I prefer
treating them that way, considering the question mark only as
syntactic sugar. Arguing about type differences based only on an
intensional difference seems to be asking for trouble and disagrees
completely with my thinking (which is heavily influenced by Formal
Concept Analysis and other conceptual modelling approaches).

I would always consider two types with the same extension in the
universe of discourse as the same for the purpose of whatever you are
doing. If they do not separate anything in your world, any difference
is irrelevant (others might use the word "academic" here).

So in short my proposal would be:
- introduce "T!" as a notation for non-nullable types
- consider T! as subtype of T for all purposes
- allow "T?" as syntactic variation of "T"

And that should be it. T as well as T? are legacy for me, the whole
problem with the other variants should be solvable using the
mechanisms generics already offers. Admittedly it ain't pretty, but
that's Java's generics for you -- adding other features to avoid
generics for those use cases seems to just add to the cruft.

The one drawback you could find in my approach is that by allowing
nulls you would always allow any other subtype, but for me that's only
consistent. In many ways null is an instance of the bottom type, i.e.
the subtype of all types. The Java compiler sees it that way
already, and even if the JVM doesn't, and it screws up everything
DbC/LSP-wise, it seems sensible to stick with that idea. Once you have
accepted that, you might as well allow anything in the interval
between your T and the bottom if you allow nulls; that seems consistent
to me since I can't see any subtype object being worse than null.

I'm not entirely sure if this is only radical or maybe plainly naive,
but so far this model seems the best to me -- I have been thinking
this way for a while and it still seems the least evil solution (the
only good one being disallowing nulls which unfortunately doesn't seem
feasible).

Peter

Reinier Zwitserloot

Oct 25, 2008, 10:01:45 AM
to The Java Posse
Peter, you have some small problems with your approach:

- non-null is not the default and can't really be due to legacy code,
which is why I added the 'source' thing, but I will submit you can
easily fix your proposal to use the same trick, and
- there are use cases where your proposal makes it impossible to
write that method without resorting to ugly hacks.

However, the big problem with your approach is simply this: It is far
more complicated than mine.

Let's compare notes here. In the interests of fairness, I'm going to
apply the following rule: Just like in my proposal, in yours, there's
a 'source' tag at the top which forces non-null as the default, and ?
is used to mark something as allows null.

Vanilla java 1.5 syntax:

public void listToUpperCase(List<String> list) {
    for ( int i = 0 ; i < list.size() ; i++ ) {
        String x = list.get(i);
        if ( x != null ) list.set(i, x.toUpperCase());
    }
}

This method is not a contrived use-case: If we had closures, the above
is effectively the 'map' operation, but performed in place. Given
java.util's focus on mutables, having this, or at the very least being
able to write it, is obviously a crucial use case. I just picked
'toUpperCase' as a random operation, but if it helps make this example
less contrived, think that instead of 'toUpperCase', it's a function
passed in as a second argument, and think that the method name is
'mapInPlace' instead of 'listToUpperCase'. Note also that for the
above method, the nullity of String is not relevant; the code
null-checks on read, and never writes null in, so it works equally well
with either type. We're going to duplicate the above exactly, in both
your and my proposal.
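
(As an aside, with closures the same method might look roughly like the
following sketch, using java 8's UnaryOperator, which of course doesn't exist
yet at the time of writing; the mapInPlace name is made up:)

import java.util.List;
import java.util.function.UnaryOperator;

class MapInPlace {
    // Null-checks on read, never writes null on its own; nullity-agnostic.
    static void mapInPlace(List<String> list, UnaryOperator<String> op) {
        for (int i = 0; i < list.size(); i++) {
            String x = list.get(i);
            if (x != null) list.set(i, op.apply(x));
        }
    }
}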


Here's your version:

@SuppressWarnings("generics")
public void listToUpperCase(List<? extends String?> list) {
    List<String?> dummy = (List<String?>) list; //this cast is what the Suppress is for
    for ( int i = 0 ; i < list.size() ; i++ ) {
        String? x = list.get(i);
        if ( x != null ) dummy.set(i, x.toUpperCase());
    }
}

Note that, if toUpperCase() actually could return null (and if we go
back to the notion that this could have been 'mapInPlace', that could
certainly happen and is entirely dependent on the method), your version
doesn't generate a compile-time error/warning. Here's my version,
where '*' means 'don't care', '?' means definitely allows null,
and nothing means definitely not null:

public void listToUpperCase(List<String*> list) {
    for ( int i = 0 ; i < list.size() ; i++ ) {
        String? x = list.get(i);
        if ( x != null ) list.set(i, x.toUpperCase());
    }
}


Much simpler than your version, and far less error-prone (due to the
cast hack, if toUpperCase() could in fact return null, your version would
silently start adding nulls to a non-null list; in my version, the
compiler would refuse to compile it).


Think for a moment about the developer of listToUpperCase. He knows
what he wants to do: He wants to make sure that the parameter is a
List of Strings, but of either nullity. He certainly does not want to
add a second method (with a different name, as due to the way java
signatures work, you can't write both of them with the same name -
nullity is not part of the signature without changing the JVM, which
doesn't happen). He also doesn't want to create or tell his method
users to employ a method that 'wraps' a List<String!> into a
List<String?>, with a runtime null-checking facility which the above
code wouldn't trigger, but does give peace of mind. My assumptions are
that both of those situations are utterly unacceptable and are grounds
to cancel the entire proposal. So, he NEEDS to make that parameter
'work'.

In my proposal, he tosses a * at it, and it just works (in this case).
In other cases, where that really wasn't legal, he'd get a
compile-time error right at the place that is the problem. For
example, let's say he forgot to add a ? to the String x = list.get(i);
line: the compiler would say "list.get(i) might return null, cannot
assign to variable of type String". In your case, without the hack, it
would instead complain about a generics problem, namely writing a
string into a list of <? extends String?>, which is not legal - and that
is a lot more confusing than the error you'd get with my system. Going
from "String" to "String*" if you specifically want to not care about
nullity also seems a lot more logical to me than either
'String?' (fundamentally broken) or '? extends String?'. The big
difference is that with your version you need to grok the whole
concept that 'String!' is a subtype of 'String?'. In mine, you really
don't. All you need to know is:

nothing = definitely not null, '?' = definitely allows null, and '*',
used in generics only, means: I don't care. Other than that, just
follow the compiler's cues.

In your version, you need to grok: nothing = definitely not null, '?'
is definitely allows null, and "FOO" is a subtype of "FOO?". That's
far more complicated.

Peter Becker

Oct 25, 2008, 8:08:56 PM
to java...@googlegroups.com
I see your point and I would never go for anything that adds casts.
I'm still stuck with my old C++ mentality that any cast points to
something being wrong, and if it is your code, you'd better fix it.

What your use case seems to require is that the user can pass either a
List<String?> or List<String!>, which you could write as (not in
current Java, but in theory):

List<? extends String? super String!>

since those two types are neighbours and thus the interval between them
is what you want. It would also replace a specific hack for
non-nullability with a general and IMO more elegant solution by
allowing the use of both bounds to define closed intervals. Admittedly
the notation isn't the nicest, but List<? extends String?, super
String!> is not that good since the comma is already used for other
purposes, and any use of parentheses to clarify things seems to have the
opposite effect. List<? in [String?,String!]> would be nice in a way
(with either "*" for top and bottom or "0"/"1"), but that's just not
Java.

When using a List<? extends String? super String!> you'd get out
String? and can put in String!, which seems to be what you want. The
drawback of allowing this construct is that you could use it for other
type intervals -- I believe that's technically ok but I don't think
any use of super is a really good idea so propagating it seems bad.

Another alternative would be to allow multiple dispatch, but that
opens another can of worms. It would allow you to offer both versions
of the method at once, possibly with some kind of monad to avoid
duplicating code (both in terms of implementing the same method twice
as well as having to implement the null-check logic twice). Since we
are really talking pure static typing here (i.e. the JVM doesn't care)
it might be possible to add some hack for distinguishing T?/T! into
the compiler.

I'm just starting to wonder how this whole problem would look if we think
of T? as an Option<T>, but I'm running out of time for now, so I leave
this as an exercise for the reader, with the option to get back to it
myself later. Seems better than delaying this post.
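
For anyone who wants to pick up that exercise, a minimal hand-rolled
Option<T> in plain java could look like the following sketch as a starting
point (all names are made up; java itself has no such type):

// Either Some(value) or None; never null on the inside.
abstract class Option<T> {
    abstract boolean isDefined();
    abstract T get(); // only legal when isDefined() is true

    static <T> Option<T> some(final T value) {
        if (value == null) throw new IllegalArgumentException("no nulls inside an Option");
        return new Option<T>() {
            boolean isDefined() { return true; }
            T get() { return value; }
        };
    }

    static <T> Option<T> none() {
        return new Option<T>() {
            boolean isDefined() { return false; }
            T get() { throw new IllegalStateException("None has no value"); }
        };
    }
}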

Cheers,
Peter

Reinier Zwitserloot

Oct 26, 2008, 1:31:25 PM
to The Java Posse
I'm holding the same discussion in two places; on my blog and here,
with the same arguments: * vs. super String! extends String?.

I think that, fundamentally, "*" has two things going for it:

1) super FOO! extends FOO? is so unwieldy, you'll see calls to make
something to reduce this 'boilerplate'.
2) It's just easier to understand; with super/extends, you need to be
intrinsically aware of the relationship between FOO! and FOO?. It's
not -just- about reading this; imagine you're joe schmoe java
developer and you need to figure this out; your mind needs to consider
String! a strict subtype of String? before you could possibly think of
the extends/super solution. I take it as a given that 90% of all java
programmers would use extends/super as a formulaic thing - they know
it works, but not why - and that is intrinsically a bad thing.

If it helps, * can effectively be a syntax shortcut for 'super/
extends' (though, by having it, there's no need to mod the generics
system to allow both bounds to be set). I can't really think of a
situation where my system acts differently than the notion that
String! is a strict subtype of String? - which, really, is true in
either case. Just stated less obviously in mine - something I will
definitely address if I write this up as a serious proposal.

I don't see multiple dispatch, or special name munging, happening.
That just isn't feasible when the JVM is not allowed to change and old
vs. new code has to interoperate; in order to make it work, the JVM
would have to create dummy methods for every single method you'd ever
write, without the name munging / with the 'allows null' version
(unmunged), just so 'old' code can call 'new' code.


Option<T> is actually very inflexible; there's simply no way to write
a method that accepts either a T or an Option<T> without serious
hackery in the way the type system works. Haskell and other Option/
Maybe languages work because everyone 'unwraps' the Option results at
the first possible opportunity; you hardly ever see a List of
Option<T>s, for example. This isn't feasible in java, for three reasons:
the mountain of legacy code that uses null, the general nature of
imperative programming which tends to come up with 'no value' more
often, and the mountain of legacy thinking - the idea that 'null'
happens will take years and years to dissipate from your average java
programmer's brain. The only way to make Option<T> work is with view
translation systems (heh, kind of like scala's def implicit stuff),
which will 'adapt' a List<Option<T>> to a List<T> on an as-needed
basis. This seems to be waaaaay more ugly, and does not work for old
code calling new code. Admittedly that use case isn't too important,
but so far java hasn't fundamentally broken that forward compatibility
too much, and it's clearly not necessary to go there, so why do it?

This isn't a judgement call that Option<T> is a worse solution than
nullity; it's a judgement that, given the history of java, Option<T>
is a much worse solution than nullity. Big difference :)

Peter Becker

Oct 26, 2008, 7:30:12 PM
to java...@googlegroups.com
On Mon, Oct 27, 2008 at 3:31 AM, Reinier Zwitserloot <rein...@gmail.com> wrote:
>
> I'm holding the same discussion in two places; on my blog and here,
> with the same arguments: * vs. super String! extends String?.
>
> I think that, fundamentally, "*" has two things going for it:
>
> 1) super FOO! extends FOO? is so unwieldly, you'll see calls to make
> something to reduce this 'boilerplate'.
> 2) It's just easier to understand; with super/extends, you need to be
> intrinsically aware of the relationship between FOO! and FOO?. It's
> not -just- about reading this, imagine you're joe schmoe java
> developer and you need to figure this out; your mind needs to consider
> String! a strict subtype of String? before you could possibly think of
> the extends/super solution. I take it as a given that 90% of all java
> programmers using the extends/super as a formulaic thing - they know
> it works, but not why - is intrinsically a bad thing.
>
> If it helps, * can effectively be a syntax shortcut for 'super/
> extends' (though, by having it, there's no need to mod the generics
> system to allow both bounds to be set). I can't really think of a
> situation where my system acts differently than the notion that
> String! is a strict subtype of String? - which, really, is true in
> either case. Just stated less obviously in mine - something I will
> definitely address if I write this up as a serious proposal.

You can see these things either way -- for me adding another dimension
(your "*") into the language has the potential of being harder to
understand than extending the existing ideas. But in this case the
situation is a bit different since Java's generics are already messy,
which means we don't necessarily want to extend anything in that
context.

> I don't see multiple dispatch, or special name munging, happening.
> That just isn't feasible when the JVM is not allowed to change and old
> vs. new code has to interoperate; in order to make it work, the JVM
> would have to create dummy methods for every single method you'd ever
> write, without the name munging / with the 'allows null' version
> (unmunged), just so 'old' code can call 'new' code.

Yes, I think you are probably right. And I'm not sure I want a partial
implementation of multiple dispatch anyway, since I tend to prefer the
all-or-nothing approach, which avoids having too many special cases.

> Option<T> is actually very inflexible; there's simply no way to write
> a method that accepts either a T or an Option<T> without serious
> hackery in the way the type system works. Haskell and other Option/
> Maybe languages work because everyone 'unwraps' the Option results at
> the first possible opportunity; you hardly ever see a List of
> Option<T>s, for example. This isn't feasible in java, for two reasons:
> The mountain of legacy code that uses null, the general nature of
> imperative programming which tends to come up with 'no value' more
> often, and the mountain of legacy thinking - the idea that 'null'
> happens will take years and years to dissipate from your average java
> programmers brain. The only way to make Option<T> work is with view
> translation systems (heh, kind of like scala's def implicit stuff),
> which will 'adapt' a List<Option<T>> to a List<T> on an as needed
> basis. This seems to be waaaaay more ugly, and does not work for old
> code calling new code. Admittedly that use case isn't too important,
> but so far java hasn't fundamentally broken that forward compatibility
> too much, and its clearly not neccessary to go there, so why do it?
>
> This isn't a judgement call that Option<T> is a worse solution than
> nullity; it's a judgement that, given the history of java, Option<T>
> is a much worse solution than nullity. Big difference :)

I know of the problems, but to some extent thinking about nullability
as Option/Maybes makes more sense than the subtyping relation.
Subtyping works for Java and similar languages but it doesn't work if
you look at the problem from a perspective of DbC or the LSP. Null is
never really an instance of any decent type since it will break any
contract that's worthwhile writing. Therefore a nullable version of a
type is not a subtype of the original type using this view.

So we have a situation where it makes sense to talk about subtyping
from an extensional perspective, but not from an intensional one. But
of course the intensional side is pretty screwed in Java anyway --
apart from allowing null for types that are not declared nullable (and
thus breaking contracts within each type), the Java space doesn't show
much respect for contracts. If anyone needs proof (I strongly
assume you don't), point them to various equals() or compareTo()
implementations in the JDK.

Since Java already has this legacy of nulls and broken LSP, part of me
feels it is ok to continue that and just assume that T! is a subtype
of T? based on purely extensional reasoning. The other part of me
thinks there should be a way to make the extensional view the dual of
the intensional one, as it should be. Option<T> works for that, but it just
doesn't fit well with the existing mess of Java's type system. Or at
least I haven't found a way to make it fit yet and I'm still hoping it
is just my incompetence :-)

What I wonder is if there is a way to see all the ?/!/*/whatever as
some kind of construction outside the existing type system, with null
not being a value within the type system established by manifest
typing. Option<T> is a way to get that, but maybe it is possible to
think in terms similar to Option<T> without actually using a construct like
that. In many ways the null/non-null/other option is a separate
facet/dimension of the problem and we model it separately in our
discussions. Maybe it would be better to just say there is a type
system defined by interfaces/classes and then we do a cross-product
with the nullability facet, which then results in the type system
that's actually used for the references. That seems to match my
conceptual thinking better and it also fits the way the corresponding
monads work in the modern FP languages. It doesn't fit Java well, but
it still might be better to find a solution in the clean model and
then try to map it into the Java world instead of trying to solve the
problem straight in the mess.

Peter

Reinier Zwitserloot

Oct 27, 2008, 9:47:16 AM
to The Java Posse
As far as I see it, Option<T> thinking is simply this:

A nullable version of FOO is a completely different thing from a non-
nullable version of FOO.

And this is very -very- incompatible with slow language evolution. The
above has all the problems inherent in an explicit Option<T>:

- a list of non-nullables cannot be supplied to a method that wants a
list of nullables (see the sketch after this list).

- there's no way to specify you can work with either without creating
two methods. Even if this was possible without name munging, you're
making something that was easy in java 1.5 a lot harder, and I -
seriously- doubt a JSR with that property will ever pass muster.

- The type system needs to change to reflect that 'null' itself, and/
or a null-containing variable/attribute/parameter, is really an
entirely different type. Given the rules about not changing the JVM
much, and the need to be mostly compatible, I just don't see this
happening.
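
To make the first point on that list concrete, a sketch using java.util.Optional
(which only appeared later, in java 8; fillSparseRow is a made-up method):

import java.util.Arrays;
import java.util.List;
import java.util.Optional;

class OptionListDemo {
    // Wants the nullable flavour: a list whose entries may be absent.
    static void fillSparseRow(List<Optional<String>> values) { /* ... */ }

    public static void main(String[] args) {
        List<String> dense = Arrays.asList("foo", "bar");
        // fillSparseRow(dense); // does not compile: a List<String> is not a List<Optional<String>>
    }
}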


I have actually been thinking about nullity as a separate typing
system right from the start, which is where the * and ? notation come
from. Those modify the type; they aren't really shorthand for 'any
type in range [String?, String!]', and they do not require explaining
them that way.

On Oct 27, 12:30 am, "Peter Becker" <peter.becker...@gmail.com> wrote:

Peter Becker

Oct 27, 2008, 7:10:15 PM
to java...@googlegroups.com
I completely agree with your descriptions here. The problem is that
this is IMO the only correct way of handling nulls (correct with respect
to treating a type system as a conceptual hierarchy), but unfortunately
it is far from what Java did in the first place. The result is that
I'm torn: part of me would like to think it right, part of me would
rather just patch the mess.

One thing that might be worthwhile pointing out is that null is
effectively not a part of the object type system (at least not the
declared one), but really of a reference type system that is induced
by the object type system and then tainted by the addition of the null
option. Somehow Java bent that back and made "null" something you can
specify as an object, which is where things go bad. Originally that
construct really comes out of the reference or even pointer world --
after all it is a NullPointerException, not a NullException or
NullReferenceException.

Somehow thinking about the object vs. reference angle doesn't seem to
produce practical results, though -- at least I haven't come up with
anything beyond some explanation of how we ended up with the mess that
Java's type system is.

Peter