Java3 - what would you do?

Reinier Zwitserloot

unread,
Jan 17, 2008, 6:26:29 AM1/17/08
to The Java Posse
I sometimes hear discussions about what you would do if you could
throw backwards compatibility out the window, or how you'd set up a
java3, etc, etc. Invariably these discussions get bogged down in
pointless minutiae like 'I'd make switch statements more sensible' or
even "I'd make generics reifiable". That's not at all interesting.
Here's my list of what I'd do better. Note that I firmly believe that
a language that has all of these features will beat the absolute pants
off of scala, which looks like a kindergartner's toy in comparison.

1. A real backwards compatibility preservation system.

This system would allow you to completely redesign the language
between versions, and old and new code, both source and binary,
would happily coexist and talk to each other without any problems.
For source compatibility, such a construct requires you to specify the
version of your source files (you'd add a "source 1.5" at the top or
some such, or you'd do so at the package or project level), and the
compiler would automatically know that, e.g., in 1.6-or-below
source, array literals create arrays, whereas in 1.7 or up they
actually create List<T>s instead, just to give an example. On the
binary side, all versions of the runtime ship with both the latest
core libraries and 'diffs' between the latest core library and each
previous version. A 'source 1.5' file would use the 1.5 version of
those libraries, and e.g. a java.util.Date.getHour() method wouldn't
even exist there. You could explicitly use older versions (in case you
need to integrate with old code or some such) using types that carry
the target version (Date{1.4}), and types would be capable of carrying
their own conversion code, so that an 'old' Date can turn itself into a
'new' Date and vice versa.
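As a rough sketch of the 'types carry their own conversion code' idea, imagine two incompatible Date types, one per language version. The class names and shapes below are invented purely for illustration; nothing like this exists in the JDK:

```java
// Hypothetical sketch: an 'old' type and a 'new' type that each carry the
// code to convert themselves into the other, so code targeting different
// language versions can exchange values. Both class names are invented.
final class OldDate {
    long millis; // mutable, 1.4-era style

    OldDate(long millis) { this.millis = millis; }

    // The old type knows how to turn itself into the new one...
    NewDate upgrade() { return new NewDate(millis); }
}

final class NewDate {
    final long millis; // immutable, newer style

    NewDate(long millis) { this.millis = millis; }

    // ...and vice versa, so old and new code can interoperate.
    OldDate downgrade() { return new OldDate(millis); }
}
```

In the proposal, the compiler would insert these conversions automatically wherever a `Date{1.4}` value crosses into newer code.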

ALL language designers screw up at some point or other. So far
languages have either decided to live with em (java), or to reset the
clock every so often (python 3000) and completely break compatibility.
I don't have a complete spec for how to pull this off, but I'm sure
it's possible to do this right. The one major beef I have with scala
is the perlish cartoon swearing. There are way too many cutesy
operators in the scala preamble (scala.lang.*), which is stupid,
because scala actually allows you to import anything into your current
lexical scope if you think you really need "/:" as an operator analogue
to foldLeft for some block of code. There's absolutely no need to foist
that on everybody. But because there's a book now, taking these mistakes
out is no longer realistic, because scala has no way of tracking target
versions. Whoops!

2. Projects.

A concept of 'projects' (kinda like OSGi/JSR-whatsit), but in a big
way. Each project is internally monolithic (meaning you compile it in
one go, and post-compilation you can't replace or recompile individual
classes; it's all or nothing. Class files no longer exist as such,
there are just jar files), and contains a separate list of 'exported'
calls (ordinary public methods are only public relative to the
project, not to anything outside of it). This enables excellent
optimizations (dead code elimination, and finding non-exported classes
that can effectively be made final), and is a great boon for tooling,
because tools can make far more assumptions about the code base that
way. Think about it: figuring out code coverage is so much easier if you
have a relatively small set of 'inroads' (all exported stuff, plus any
main method).

There are issues to be worked out in regards to transporting objects
between projects, but I'm sure a smart solution can be found.
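The 'small set of inroads' point can be sketched as a simple allowlist: the project declares its exported members, and everything else is unreachable from outside. All names here are invented for illustration:

```java
import java.util.Set;

// Sketch of per-project exports: only listed members (plus main methods)
// are entry points from outside the project, which is what gives the
// optimizer and coverage tools their small, known set of 'inroads'.
final class ProjectExports {
    private final Set<String> exported;

    ProjectExports(Set<String> exported) { this.exported = exported; }

    boolean isEntryPoint(String member) {
        return exported.contains(member) || member.endsWith("#main");
    }
}
```

Everything for which `isEntryPoint` is false is fair game for dead-code elimination or devirtualization, since no outside caller can reach it.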

3. Generators.

Imagine for a moment that java allowed you to specify 'literal
builders'. These literal builders would be given a string. They emit
code that can call 'special' (private-ish) constructors, as well as
AST-based errors and warnings (from character Y to character Z, error:
"open parenthesis not matched with a closing one." or some such). For
the regexp system, you would be able to construct regexps as literals,
and eclipse/IDEA/NetBeans would tell you on the spot, as you write one,
that your regexp is broken. Contrast this with the current situation,
where you need a load of ugly escapes AND you need to catch regexp
compile exceptions even though you know full well your regexp is error-
free (or you use an alternative method name that signals you know what
you're doing). That stuff is testable at compile time, so it should be
done at compile time (= write time, for a sufficiently smart IDE). You
could use this for SQL literals, XML literals, regexp literals, JSON
literals, and many, many other forms of literals. (The special-ish
constructor comes in handy here so your regexp compiler can compile
the whole thing to a regexp finite state machine, and supply THAT to
the special constructor. This constructor knows the node list is from
the generator and thus doesn't need to double-check it for illegal
arguments and the like; it can just accept it as being error-free.
Hey, look, we sped up our code at runtime AND made it more readable
at write time! Win!)
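The 'testable at compile time' claim is easy to demonstrate with today's API: java.util.regex.Pattern already reports the exact index at which a regexp goes wrong, which is precisely the information a literal builder would surface as an AST-based error. A minimal sketch:

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

// A write-time regexp check: returns -1 for a valid pattern, otherwise the
// character index where the error sits -- the same 'from character Y'
// information a literal builder would attach to the source.
final class RegexpCheck {
    static int errorIndex(String regexp) {
        try {
            Pattern.compile(regexp);
            return -1;
        } catch (PatternSyntaxException e) {
            return e.getIndex();
        }
    }
}
```

An IDE could run exactly this check as you type; the literal-builder proposal just moves the check from runtime to the compiler.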

Of all the things scala has solved in libraries (using operator
overloading and the like), one of the few things they hardcoded in the
spec is... support for XML literals. Whoops! Yet another scala
mistake. I love scala to bits, but with some imagination you can
already see the whiners' standard arguments for why 'scala is the old
and crappy thing and we should use this shiny new thing!'. Generators
move this stuff into the library, which even means (see suggestion #1)
you can take outdated formats out in the future with no fuss, and anyone
can write new ones.

4. Canonical AST.

Right now the various java tools use various means to turn java code
into tokens and from there into code, errors, warnings, 'findbugs'-
like suggestions, javadocs, etcetera, etcetera. Unfortunately this
means there is no standard. Eclipse generates wildly different error
messages than e.g. netbeans, and they all differ slightly in how code
is rendered and the like. By standardizing this and shipping it along
with the VM, you make life easy for tool writers, you standardize
across tools, and you open the door to 'tool manipulators' - think APT,
but in a much bigger way. In such a world, you could plug java5
into an eclipse written before java5 ever existed and it would just
work, including the proper errors, warnings, etcetera. Maybe eclipse
would come out later with an 'add-on' with extra quick fixes and more
detailed (findbugsian) warnings, but that construction would not be
bound to the IDE, and you could use the netbeans extra set on eclipse
and vice versa. FindBugs would just be part of your IDE, flagging code
as you type it, same as the usual syntax errors. To do this, findbugs
just writes one project against the standard AST model and all IDEs can
use it: no need for an eclipse plugin, a netbeans plugin, a this
plugin, a that plugin, an ant plugin, etc, etc, etc. It just works.
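A sketch of the idea, with an invented node shape: once every tool reads the same canonical tree, an error reporter, a metrics tool, or a findbugs-style checker is just a tree walk, with no private parser per tool. Nothing here is a real compiler API:

```java
import java.util.List;

// A single canonical tree shape that any tool can walk. The 'kind'/'text'
// node layout is invented for illustration; the walker below stands in for
// any tool that would otherwise need its own parser.
final class AstNode {
    final String kind;
    final String text;
    final List<AstNode> children;

    AstNode(String kind, String text, List<AstNode> children) {
        this.kind = kind;
        this.text = text;
        this.children = children;
    }

    // One example 'tool': count nodes of a given kind anywhere in the tree.
    int count(String wantedKind) {
        int n = kind.equals(wantedKind) ? 1 : 0;
        for (AstNode child : children) n += child.count(wantedKind);
        return n;
    }
}
```

The point is the shared data structure, not this particular walk: eclipse, netbeans, and findbugs would all consume the same `AstNode`-style tree the VM ships.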

5. Pre-ambles.

DSLs are nice, but on the other hand they allow you to create some
really weird code. There are no 'puzzlers for ruby', because the book
would be a 5,000,000-page-long horror story; due to the dynamics of
ruby, no single line of code can be relied upon to do anything unless
you know exactly what has gone before at runtime, which is halting-
problem-hard to determine. Scala does a better job at it; its DSL
shenanigans are not runtime-scoped but lexically scoped. You at least
know where to look for the code that redefines what
String.toLowerCase() does, for example. However, going through all that
for every file is still annoying, as is writing some custom change into
every file of a project. Thus, pre-ambles: each project (see suggestion
#2) has one place where you can decide to define that 'anyList.sort();'
is a shortcut for 'Collections.sort(anyList);', for example. Anyone who
wants to even begin reading your code first has to read the pre-amble,
but once they have, they can read all the code in the entire project
with a pretty good idea of how things work. As a bonus, IDEs can easily
be fully aware of each project's pre-amble. Rails (or other DSL-heavy
stuff) for java would then basically become an 'include
rails.preamble.java' in your project's preamble.
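Today's nearest (much weaker) relative of a pre-amble is the static import: one declaration at the top of a file, after which the call sites read like built-ins. A project-wide pre-amble would lift that single declaration from per-file to per-project:

```java
// A static import is a per-file, cut-down version of the pre-amble idea:
// declare once, then write sort(list) as if it were part of the language.
import static java.util.Collections.sort;

import java.util.ArrayList;
import java.util.List;

final class PreambleDemo {
    static List<Integer> sorted(List<Integer> input) {
        List<Integer> copy = new ArrayList<>(input);
        sort(copy); // reads close to the proposed 'anyList.sort();'
        return copy;
    }
}
```

The proposed feature goes further (it can rewrite call shapes like `anyList.sort()`, not just shorten names), but the reading model is the same: check one declaration site, then trust the rest of the code.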


Anyone else have big ideas?

Peter Becker

unread,
Jan 17, 2008, 7:25:04 AM1/17/08
to java...@googlegroups.com
In rough order of perceived importance (which can vary depending on my mood):

1) My biggest wish extends your point 1 -- I would like to version interfaces in general, not just the JDK. For example, version 3 of my interface might want to return a List<T> instead of a T[], but keep the same method name. If my clients refer to a specific version of the interface, then this is feasible. It would also make it obvious that @deprecated and @since are no proper solution. The version used would be stated in the import. The client of the interface (and in turn library, component, whatever) can then decide when to upgrade, as long as I still support older versions (which needs some kind of mechanism for attaching wrappers to be really nice).
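What the 'attached wrapper' between interface versions could look like with today's Java; the Store names and the T[]-to-List<T> change below are invented to match the example:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical versioned interface: v3 keeps the method name but returns
// List<String> instead of String[]. An adapter lets clients pinned to v2
// keep compiling against the old signature.
interface StoreV2 { String[] items(); }

interface StoreV3 { List<String> items(); }

final class StoreV2Adapter implements StoreV2 {
    private final StoreV3 modern;

    StoreV2Adapter(StoreV3 modern) { this.modern = modern; }

    @Override public String[] items() {
        return modern.items().toArray(new String[0]);
    }
}
```

In the proposal, importing `StoreV2` would make the language attach this kind of wrapper automatically instead of the library author writing it by hand.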

2) Replacing the hacked together access control with something proper, such as the "class X implements interface Y for [package P/public/subclasses]" idea.

3) To round things off I would like management of nullability (probably still sticking with nullable as the default for a while, to ease the transition) and possibly disallowing the use of classes as types (i.e. no classes as parameter or return types).

4) Union/join types would be cool, too -- instead of having a combination of returning a value, returning null, or throwing an exception, you could just return "[MyResultType or MyErrorType]". If MyErrorType has only one instance you have the null case, but with semantics. At least with generics we have a way to express the meet now (a bit more complicated than necessary, but it works). Note that this could replace checked exceptions (and unchecked ones should be forbidden anyway).
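The "[MyResultType or MyErrorType]" shape can be approximated with today's generics. This is only a sketch with invented names, not the proposed language feature:

```java
// A poor man's union type: exactly one of 'result' or 'error' is present.
// With a single-instance error type, this covers the 'null case but with
// semantics' described above.
final class OneOf<R, E> {
    private final R result;
    private final E error;

    private OneOf(R result, E error) { this.result = result; this.error = error; }

    static <R, E> OneOf<R, E> ok(R result) { return new OneOf<>(result, null); }
    static <R, E> OneOf<R, E> err(E error) { return new OneOf<>(null, error); }

    boolean isOk() { return error == null; }
    R result()     { return result; }
    E error()      { return error; }
}
```

A real union type would let the compiler force callers to handle both arms, which this encoding cannot do on its own.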

5) Value types (immutable objects with all the optimizations for interning and serialization) would be a neat one, too.

6) Proper assertions defined on the interfaces, not in the method bodies (starting with a simple expression language is fine by me).

7) In the long term, things like units and subtyping by constraining value ranges (à la XSD, e.g. "integer between 0 and 50" or "string matching regexp"). The latter would be very cool for form-based applications: instead of a weak type and a validator you just have a type that knows exactly what is allowed and what is not.
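The "integer between 0 and 50" case can at least be emulated today by putting the constraint into a small value type's constructor; the class name here is invented:

```java
// The constraint lives in the type itself instead of a separate validator:
// if you hold a BoundedInt, you already know it is in range.
final class BoundedInt {
    final int value; // always between 0 and 50 inclusive

    BoundedInt(int value) {
        if (value < 0 || value > 50) {
            throw new IllegalArgumentException("must be in 0..50: " + value);
        }
        this.value = value;
    }
}
```

Proper range subtyping would move this check to compile time where the bounds are statically known; the constructor version is the runtime fallback.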

8) Signals and slots, or some equivalent message-passing / component-wiring mechanism. Listeners work, but they are common enough and messy enough to deserve being hidden behind some nice syntactic sugar. Syntax is trivial, but sometimes things go too far :-)

I don't really care about closures at the moment, anonymous inner types on one-method interfaces work for me. Lots of character overhead, but my IDE types it for me and as long as it is formatted consistently it doesn't really slow me down when reading (I think).

Reified generics aren't that important either, since I don't like using reflection. There has been the occasional case where I wanted to implement the same generic interface twice with different types as parameters, which you can't do at the moment. I'm not sure that really needs reification, though -- and if an interface has a method where the type parameter is the return type and is not used otherwise, it wouldn't work anyway, so maybe that is just dreaming.

I think most of this is quite doable, and if your point 1 is covered it should be quite feasible to introduce these things step by step into the Java language. All at once would be too much of a shock, but I think people might slowly learn to appreciate the extra safety and IDE goodness a stronger type system gives them.

  Peter

Scot

unread,
Jan 17, 2008, 1:49:57 PM1/17/08
to The Java Posse


On Jan 17, 11:26 am, Reinier Zwitserloot <reini...@gmail.com> wrote:
> I sometimes hear discussions about what you would do if you could
> throw backwards compatibility out the window, or how you'd set up a
> java3, etc, etc. Invariably these discussions get bogged down in
> pointless minutiae like 'I'd make switch statements more sensical' or
> even "I'd make generics reifiable". That's not at all interesting.

To me, a decent switch statement (like Groovy's) and reified generics
are interesting. They're not pointless minutiae, they're the sorts of
things that would make day to day coding in "Java++" better than
coding in plain old Java.

> here's my list of what I'd do better. Note that I firmly believe that
> a language that has all of these features will beat the absolute pants
> off of scala, which looks like a kindergarten's toy in comparison.
>

But wouldn't a version of Scala running on VM that supports your
proposed features get those proposed features?


> 1. A real backwards compatibility preservation system.
>

To me, this seems like it will lead to infinite code bloat, with the
runtime constantly growing in size just to accommodate long-deprecated
code. I think resetting the clock every so often, like
python does, is the way forward. @deprecated would actually mean
something. Even better, bring in some form of future imports (again,
like python) so you can start using new language features while you
get your old code up to speed.

If backwards compatibility really is a must, maybe some form of "VM
inside a VM", where old code runs in what is effectively an old VM
with a communications layer that performs the needed translations
to/from the new code. Even better, compatibility could be an optional
install, so if you don't need backward compatibility you don't have to
install it.

> 2. Projects.
>
> A concept of 'projects' (kinda like OSGi/JSR-whatsit), but in a big
> way. Each project is internally monolithic (meaning, you compile it in
> one go, and post compilation you can't replace or recompile individual
> classes, it's all or nothing. Class files no longer exist as such,
> there are just jar files), and contains a separate list of 'exported'
> calls (ordinary public methods are only public relative to the
> project, not anything outside of it).

Do you mean components? While I have nothing against components, I've
got nothing against standalone class files either. While it would be a
great option, I wouldn't force components on everyone.


> 4. Canonical AST.

How would this work amongst different compilers? If I were writing
"Scot's Fast Java++ Compiler", I might need a very different AST from
the standard Sun compiler, and an Oracle/BEA compiler might have its
own needs as well.


>
> Anyone else have big ideas?

Big ideas, no. Small ideas, plenty - multi-line strings, closures,
optional semicolons, properties, list comprehensions/python-style
generators, and Scala-style traits, to name a few. Basically, Groovy +
Python + Scala.

Scot

Christian Catchpole

unread,
Jan 17, 2008, 3:32:48 PM1/17/08
to The Java Posse
Well, if it's not eminently suited to building break dancing robots, I
don't want to know about it. :)

Peter Becker

unread,
Jan 17, 2008, 5:29:18 PM1/17/08
to java...@googlegroups.com
try this:

        dance: for (Robot robot : robots)
        {
            break dance;
        }

Christian Catchpole

unread,
Jan 17, 2008, 8:05:12 PM1/17/08
to The Java Posse
Robots have an unfair advantage in "doing the robot".

I was thinking of the keywords "electric boogaloo", but "electric" is
somewhat redundant.

On Jan 18, 8:29 am, "Peter Becker" <peter.becker...@gmail.com> wrote:
> try this:
>
>         dance: for (Robot robot : robots)
>         {
>             break dance;
>         }
>
> On Jan 18, 2008 6:32 AM, Christian Catchpole <christ...@catchpole.net>

Scot

unread,
Jan 18, 2008, 6:16:57 AM1/18/08
to The Java Posse


On Jan 17, 8:32 pm, Christian Catchpole <christ...@catchpole.net>
wrote:
> Well, if its not eminently suited to building break dancing robots, I
> don't want to know about it. :)


If they make a version of the Squawk VM that supports Java3, then you
can build your break dancing robot out of SunSPOTs
(http://www.sunspotworld.com)

Tiny robots + closures = world domination

Reinier Zwitserloot

unread,
Jan 18, 2008, 6:39:44 AM1/18/08
to The Java Posse
Scot, you're perhaps lacking some imagination.

On Jan 17, 7:49 pm, Scot <scot.mcsweeney.robe...@gmail.com> wrote:
> On Jan 17, 11:26 am, Reinier Zwitserloot <reini...@gmail.com> wrote:
>
> To me, a decent switch statement (like Groovy's) and reified generics
> are interesting. They're not pointless minutiae, they're the sorts of
> things that would make day to day coding in "Java++" better than
> coding in plain old Java.

Sure. But those conclusions are either A) obvious, or B) endless
discussions about taste, everyone wanting their own thing. Thus, not
interesting. What might be interesting is some sort of system whereby
you can see it your way, and your fellow programmers see it their way,
the editor transforming the code (which semantically never changes) to
the preferences of the viewer on the fly.

>
> But wouldn't a version of Scala running on VM that supports your
> proposed features get those proposed features?

You'd have to reset the clock on scala to do it, and is it still scala
then?

>
> > 1. A real backwards compatibility preservation system.
>
> To me, this seems like it will lead to infinite code bloat, with the
> runtime constantly growing in size, just to accommodate long
> deprecated code.

Put a little more effort into it. As I already wrote, I envision that
the backwards compatibility is stored in 'diffs' against the current
library. Those won't be very large, and why would you care anyway?
Your IDE won't show them to you, and your JRE won't download them
until you have code that needs em. And when you do have code that
needs em, you'll be VERY HAPPY that it's there. You don't need to worry
about supporting more than the previous version in your new version;
older code will convert itself 'up the chain' to turn itself into code
that is accessible via the new API.

If for whatever reason you have a vendetta against old code, just toss
the 'v12tov11Library.jar' and you're done. It contained the diffs to
take v11 code into v12 land, as well as the original v11 version of
the AST and an AST converter to convert that AST into the v12 AST.

The point of this change is that you get both: you get perfect
backwards compatibility, and yet you can reset the clock anytime you
think it's a good idea to do so. That's not to say every new version
should redefine half the language; the concern that people need to
keep up with the changes remains. However, instead of ugly hacks like
type erasure to support generics without breaking backwards
compatibility, you can change whatever you like, no longer constrained
by such technical details.

> python does, is the way forward. @deprecated would actually mean
> something. Even better, bring in some form of future imports (again,
> like python) so you can start using new language features while you
> get your old code up to speed.

Again, a proper backwards compatibility system allows you to do this.
Just like you can ship 'diffs' that turn older versioned code into
something that can interface with newer versioned code, you can also
add a diff that turns newer versioned code (future code) into
something compatible with the at the time 'canonical' "current
version". It's future and deprecated, but done in such a way that it
actually works right.

>
> If backwards compatibility really is a must, maybe some form of "VM
> inside a VM", where old code runs in what is effectively an old VM
> with a communications layer that would perform the needed translations
> to/from the new code. Even better, compatibility could be an optional
> install,so if you don't need backward compatibility you don't have to
> install it.

Needlessly complicated and slow. You don't need to install a diff-like
jar either. It can be downloaded if you want to run old code
(automatically of course).

>
> > 2. Projects.
>
> > A concept of 'projects' (kinda like OSGi/JSR-whatsit), but in a big
> > way. Each project is internally monolithic (meaning, you compile it in
> > one go, and post compilation you can't replace or recompile individual
> > classes, it's all or nothing. Class files no longer exist as such,
> > there are just jar files), and contains a separate list of 'exported'
> > calls (ordinary public methods are only public relative to the
> > project, not anything outside of it).
>
> Do you mean components? While I have nothing against components, I've
> got nothing against standalone class files either. While it would be a
> great option, I wouldn't force components on everyone.

No. I'm talking about projects. I don't know about you, but I never
really work without a project scope of some sort. Even a single class
file is still a project, just a small one. The compiler would
just compile it into a jar file instead of a class file. That's all.
You wouldn't really notice.

>
> > 4. Canonical AST.
>
> How would this work amongst different compilers? If I was writing
> "Scot's Fast Java++ Compiler", I might need a very different AST from
> the standard Sun compiler and if there was an Oracle/Bea compiler that
> might have it's own needs as well.

Then you transform the canonical AST into your own and go from there.

Viktor Klang

unread,
Jan 18, 2008, 7:25:12 AM1/18/08
to java...@googlegroups.com
What I envision is a gigantic distributed library repository, so you'll never have to ship _anything_;
you just click a link to a URL that manages everything for you. (Think Maven on crack and without the command-line scheisse.)
 
Cheers,
-V

 
On 1/18/08, Reinier Zwitserloot <rein...@gmail.com> wrote:

Scot, you're perhaps lacking some imagination.




--
_____________________________________
/                                                                 \
        /lift/ committer ( www.liftweb.net)
      SGS member (Scala Group Sweden)
  SEJUG member (Swedish Java User Group)
\_____________________________________/

Casper Bang

unread,
Jan 18, 2008, 7:51:58 AM1/18/08
to The Java Posse
> Java3 - what would you do?

What any passionate developer would do: Start jumping up and down
while running around the office screaming "King Java 2 is dead, long
live the king, Java 3".

/Casper

Peter Becker

unread,
Jan 18, 2008, 8:59:12 AM1/18/08
to java...@googlegroups.com
Call me a control freak, but I never liked the idea of someone out there managing the libraries I use. I want to be able to check out a project version of two years ago and do a build exactly as I did then. I might compromise on things like the JDK and Ant, storing their versions as references instead of checking them in (there is some trust there), but I wouldn't dare rely on external sources for every single library I use (my trust doesn't extend to e.g. Maven repositories). So I like building JARs with a dump of the Ant properties (thus getting the Ant and JDK versions, and probably also the VCS revision, since I pulled that out somewhere), and all libraries are checked in completely: the JAR, a matching source zip, the licence, and a little readme with version info and URL.

Everyone who has done serious development with Maven seems to have ended up spending a lot of effort locking down dependencies and Maven plugins. It sounds pretty scary to me, although I have to admit that I never got beyond the "Hello World" stage with Maven myself (probably because I'm such a control freak, or a wuss).

But I guess that's a matter of taste (or requirements, which are a matter of taste). Just don't force me to use this -- you might find me ducking under the table :-)

  Peter

Jess Holle

unread,
Jan 18, 2008, 9:33:07 AM1/18/08
to java...@googlegroups.com
I'd concur 100%.

I can see that in an open source development model it would be a tremendous waste of resources on SourceForge, etc, to redundantly store all the dependencies of each project in each project rather than have some means of cross-referencing.

For those of us not in that world, however, this concern is minuscule whereas having all our dependencies exactly as we had them at each point in time, labeled in our source control system, etc, is much more important.

I probably go to extremes here as I keep the full source distributions around and source controlled as well -- so I can quickly look back at any point in time and look at the appropriate sources for any dependencies without having to go track them down again (or relying upon sites to still be up, etc).

All-in-all I see Maven as an economy of scale tool for open source development -- but of little interest to those needing solid control and traceability.

--
Jess Holle

Jess Holle

unread,
Jan 18, 2008, 9:33:59 AM1/18/08
to java...@googlegroups.com
Jess Holle wrote:
I'd concur 100%.

I can see that in an open source development model it would be a tremendous waste of resources on SourceForge, etc, to redundantly store all the dependencies of each project in each project rather than have some means of cross-referencing.

For those of us not in that world, however, this concern is minuscule whereas having all our dependencies exactly as we had them at each point in time, labeled in our source control system, etc, is much more important.

I probably go to extremes here as I keep the full source distributions around and source controlled as well -- so I can quickly look back at any point in time and look at the appropriate sources for any dependencies without having to go track them down again (or relying upon sites to still be up, etc).

All-in-all I see Maven as an economy of scale tool for open source development -- but of little interest to those needing solid control and traceability.
I should clarify -- I mean Maven repositories here, not the entirety of the tool.

--
Jess Holle

Viktor Klang

unread,
Jan 18, 2008, 9:49:16 AM1/18/08
to java...@googlegroups.com


On 1/18/08, Peter Becker <peter.b...@gmail.com> wrote:
Call me control freak but I never liked the idea of someone out there managing the libraries I use. I want to be able to check out a project version of two years ago and do a build as I did then. I might compromise on things like the JDK and Ant, storing their versions as references instead of checking them in (there is some trust there), but I wouldn't dare relying on external sources for every single library I use (my trust doesn't extend into e.g. Maven repositories) -- so I like building JARs with a dump of the Ant properties (thus getting Ant and JDK versions and probably also the VCS revision since I pulled that out somewhere) and all libraries are checked in completely with the JAR, a matching source zip, the licence and a little readme with version info and URL.
 
I think you're jumping the gun on this.
I assume you have used BitTorrent - did you like it?

All it takes is good versioning/packaging of dependencies in a manifest file, something like:
 
needs JRE version > 1.6.0 noupgrade verify(URL_TO_VERIFIER_GOES_HERE)
needs log4j version > 1.4.0 autoupgrade verify(URL_TO_VERIFIER_GOES_HERE)
needs crap4u version = 1.5 verify(URL_TO_VERIFIER_GOES_HERE)
 
Combine this with SHA1/2, CRCs, etc., and trackers available everywhere.
With some additional brainpower and some tech-savvy doods like ourselves, this could be a reality.

Just imagine: installing a Java application would be clicking a link and having JRE update N downloaded, a small application jar loaded, and then the dependencies fed intravenously downstream.

The icing on the cake could be that all clients distribute their copies of the dependencies à la true p2p.
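The verification step in that manifest boils down to comparing digests. A minimal sketch with the standard MessageDigest API, using SHA-256 as a stand-in for the SHA1/2-plus-CRC mix mentioned above:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Accept a downloaded dependency only if its digest matches the manifest.
final class DependencyCheck {
    static String sha256Hex(byte[] content) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest(content)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 is always available", e);
        }
    }

    static boolean verify(byte[] content, String expectedHex) {
        return sha256Hex(content).equals(expectedHex);
    }
}
```

With peers distributing copies à la p2p, this check is what lets you trust bytes regardless of which peer served them.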
 
Cheers,
-V
 

 




elpablo

unread,
Jan 18, 2008, 10:17:46 AM1/18/08
to The Java Posse
I was under the impression that people generally maintain their
own maven repository inside the local network, and so have control over
versioning and can treat their own internal libraries in the same way
as external libraries, all in one place.

On Jan 18, 1:59 pm, "Peter Becker" <peter.becker...@gmail.com> wrote:
> Call me control freak but I never liked the idea of someone out there
> managing the libraries I use. I want to be able to check out a project
> version of two years ago and do a build as I did then. I might compromise on
> things like the JDK and Ant, storing their versions as references instead of
> checking them in (there is some trust there), but I wouldn't dare relying on
> external sources for every single library I use (my trust doesn't extend
> into e.g. Maven repositories) -- so I like building JARs with a dump of the
> Ant properties (thus getting Ant and JDK versions and probably also the VCS
> revision since I pulled that out somewhere) and all libraries are checked in
> completely with the JAR, a matching source zip, the licence and a little
> readme with version info and URL.
>
> Everyone who did serious development with Maven seems to have ended up
> spending a lot of work doing lockdowns of dependencies and Maven plugins. It
> sounds pretty scary to me, although I have to admit that I never got beyond
> the "Hello World" stage with Maven myself (probably because I'm such a
> control freak or wuss).
>
> But I guess that's a matter of taste (or requirements, which are a matter of
> taste). Just don't force me to use this -- you might find me ducking under
> the table :-)
>
> Peter
>
> On Jan 18, 2008 10:25 PM, Viktor Klang <viktor.kl...@gmail.com> wrote:
>
> > What I envision is a gigantic distributed library repository, so you'll
> > never have to ship _anything_,
> > you just click a link to an URL that manages everything for you. (Think
> > Maven on crack and without the command-line scheisse)
>
> > Cheers,
> > -V
>

Casper Bang

unread,
Jan 18, 2008, 10:39:01 AM1/18/08
to The Java Posse
> I was under the impression that people generally maintain their
> own maven repository inside the local network, and so have control over
> versioning and can treat their own internal libraries in the same way
> as external libraries, all in one place.

That's certainly how we do it anyway, seems to work great.

/Casper

Reinier Zwitserloot

unread,
Jan 18, 2008, 2:35:44 PM1/18/08
to The Java Posse
On Jan 18, 2:59 pm, "Peter Becker" <peter.becker...@gmail.com> wrote:
> Call me control freak but I never liked the idea of someone out there
> managing the libraries I use.

But that's the beauty of it. When versioning of projects is a major
part of the language itself, ALL libraries are versioned as well, and
all source files carry inside them their dependencies, including
versions. Your project will use the same version of the same library
until the end of time, or, of course, until you dive in there and
update it. With tactical version numbering you can make the usual
distinction between api-compatible serious bug and security fixes,
which get applied automatically; minor updates, which you may or may
not want to auto-use; and major updates, which you have to confirm
manually. In virtually 100% of circumstances it'll just work (due to
the advanced support for letting libraries integrate with their own old
versions). The usual repository flexibility would of course be there,
so that you can clone the stuff you need from the main repository and
keep everything 'in house' if you don't trust the main repository too
much or some such.
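The three-tier update rule described here is easy to make concrete. A sketch with invented policy names, assuming plain major.minor.patch version strings:

```java
// 'Tactical version numbering': patch releases apply automatically, minor
// releases are opt-in, and major releases need manual confirmation.
enum UpdatePolicy { AUTO, OPT_IN, MANUAL }

final class VersionPolicy {
    static UpdatePolicy forUpgrade(String current, String offered) {
        String[] cur = current.split("\\.");
        String[] off = offered.split("\\.");
        if (!cur[0].equals(off[0])) return UpdatePolicy.MANUAL; // major bump
        if (!cur[1].equals(off[1])) return UpdatePolicy.OPT_IN; // minor bump
        return UpdatePolicy.AUTO;                               // patch bump
    }
}
```

A real system would of course need richer version syntax than three dot-separated numbers, but the decision table stays the same.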

I don't want to get hung up on details here. Such issues can be fixed.


>
> Everyone who did serious development with Maven seems to have ended up
> spending a lot of work doing lockdowns of dependencies and Maven plugins. It
> sounds pretty scary to me, although I have to admit that I never got beyond
> the "Hello World" stage with Maven myself (probably because I'm such a
> control freak or wuss).

I don't have that much experience with Maven myself. Does maven not
allow you to make a reference to a specific version or some such?
Personally I'd say that, given that all APIs are versioned one way or
another, any system that doesn't allow you to specify the target
version is broken by design.

>
> But I guess that's a matter of taste (or requirements, which are a matter of
> taste). Just don't force me to use this -- you might find me ducking under
> the table :-)

No worries; you always need a way to deploy a full version without
needing further internet access one way or another.

Peter Becker

unread,
Jan 18, 2008, 4:45:27 PM1/18/08
to java...@googlegroups.com
On Jan 19, 2008 5:35 AM, Reinier Zwitserloot <rein...@gmail.com> wrote:

On Jan 18, 2:59pm, "Peter Becker" <peter.becker...@gmail.com> wrote:
> Call me control freak but I never liked the idea of someone out there
> managing the libraries I use.

But that's the beauty of it. When versioning of projects is a major
part of the language itself, ALL libraries are versioned as well, and
all source files carry inside them dependencies including versions.
Your project will use the same version of the same library until the
end of times, or, of course, until you dive in there and update them.
With tactical version numbering you can make the usual distinction
between api-compatible serious bug and security fixes which do get
automatically updated, minor updates which you do or don't want to
auto-use, and major updates that you have to manually confirm. Even if
in virtually 100% of the circumstances it'll just work (due to the
advanced support for letting libraries integrate with their own old
versions). The usual repository flexibility would of course be there
so that you can clone the stuff you need from the main repository and
keep everything 'in house' if you don't trust the main repository too
much or some such.

I don't want to get hung up on details here. Such issues can be fixed.

I guess I was jumping the gun a little bit -- in the end it comes down to a question of trust, not technology. I trust some Linux distributions enough to run production servers using them, but with the whole Maven/Ivy story I still see two differences: (a) a culture/history of quality doesn't seem to be established and (b) a server can be replaced relatively easily, my product is more important to me.

Can I see it working? Yes, but I would need a lot of trust in the organization or community providing that service. It's not only about getting it right in the first place (right dependencies, properly built libraries) but also about maintaining the structure and contents for a long time. I want to be able to keep building the same thing for a number of years, so I have to rely on the system being able to produce the same dependencies and the same libraries with the same API (since I will have an old build configuration in my VCS).

>
> Everyone who did serious development with Maven seems to have ended up
> spending a lot of work doing lockdowns of dependencies and Maven plugins. It
> sounds pretty scary to me, although I have to admit that I never got beyond
> the "Hello World" stage with Maven myself (probably because I'm such a
> control freak or wuss).

I don't have that much experience with Maven myself. Does maven not
allow you to make a reference to a specific version or some such?
Personally I'd say that, given that all APIs are versioned one way or
another, any system that doesn't allow you to specify the target
version is broken by design.

Maven does let you specify target versions, yes.
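For example, pinning an exact version in a pom.xml looks like this (the
coordinates are just an example):

```xml
<!-- A pinned dependency: the build resolves exactly this artifact every time. -->
<dependency>
  <groupId>joda-time</groupId>
  <artifactId>joda-time</artifactId>
  <version>1.5</version>
</dependency>
```

Unwanted transitive dependencies can also be cut off with an
`<exclusions>` element inside the dependency.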
 

man...@mosabuam.com

unread,
Jan 18, 2008, 4:56:51 PM1/18/08
to java...@googlegroups.com
Quoting Peter Becker <peter.b...@gmail.com>:

> On Jan 19, 2008 5:35 AM, Reinier Zwitserloot <rein...@gmail.com> wrote:
>
> I guess I was jumping the gun a little bit -- in the end it comes down to a
> question of trust, not technology. I trust some Linux distributions enough
> to run production servers using them, but with the whole Maven/Ivy story I
> still see two differences: (a) a culture/history of quality doesn't seem to
> be established and (b) a server can be replaced relatively easily, my
> product is more important to me.

Debian is currently in the process of including Maven and in the
longer run including some sort of controlled repository (or mapping to
it) in its repositories. That should be good enough. Until then
nothing stops you from running your internal repository server and
controlling it all internally down to the version of each transitive
dependency. Doing that with maven is still way better than doing it
without IMHO...

> Can I see it working? Yes, but I would need a lot of trust in the
> organization or community providing that service. It's not only about
> getting it right in the first place (right dependencies, properly build
> libraries) but also about maintaining the structure and contents for a long
> time. I want to be able to keep building the same thing for a number of
> years, so I have to rely on the system being able to produce the same
> dependencies and the same libraries with the same API (since I will have an
> old build configuration in my VCS).

which maven can do just fine...

>> > Everyone who did serious development with Maven seems to have ended up
>> > spending a lot of work doing lockdowns of dependencies and Maven
>> plugins. It
>> > sounds pretty scary to me, although I have to admit that I never got
>> beyond
>> > the "Hello World" stage with Maven myself (probably because I'm such a
>> > control freak or wuss).
>>
>> I don't have that much experience with Maven myself. Does maven not
>> allow you to make a reference to a specific version or some such?

Yes it does. It also allows you to control transitive dependencies
where necessary as well as controlling the repository everything is
retrieved from..

manfred

Peter Becker

unread,
Jan 18, 2008, 6:58:26 PM1/18/08
to java...@googlegroups.com
On Jan 19, 2008 7:56 AM, <man...@mosabuam.com> wrote:

Quoting Peter Becker <peter.b...@gmail.com>:

> On Jan 19, 2008 5:35 AM, Reinier Zwitserloot < rein...@gmail.com> wrote:
>
> I guess I was jumping the gun a little bit -- in the end it comes down to a
> question of trust, not technology. I trust some Linux distributions enough
> to run production servers using them, but with the whole Maven/Ivy story I
> still see two differences: (a) a culture/history of quality doesn't seem to
> be established and (b) a server can be replaced relatively easily, my
> product is more important to me.

Debian is currently in the process of including Maven and in the
longer run including some sort of controlled repository (or mapping to
it) in its repositories. That should be good enough. Until then
nothing stops you from running your internal repository server and
controlling it all internally down to the version of each transitive
dependency. Doing that with maven is still way better than doing it
without IMHO...

I think it really depends on the context. I personally find the extra effort of maintaining my dependencies myself not that bad, but that is partly because (a) I do not currently work in the context of a large organisation where you would set organisation-wide standards and (b) I am shy about introducing dependencies in the first place (a.k.a. "suffering from NIHS").

In the context of a large organization where you can afford dedicated staff for library management, an approach like the one Maven/Ivy take seems useful. It's not really the approach that worries me, it is the question of the quality of the repository. And the idea of one global repository that rules them all just scares me -- it makes too tempting a focal point for breakage or even attack. And that seemed to be the proposal that started me off.
 
> Can I see it working? Yes, but I would need a lot of trust in the
> organization or community providing that service. It's not only about
> getting it right in the first place (right dependencies, properly build
> libraries) but also about maintaining the structure and contents for a long
> time. I want to be able to keep building the same thing for a number of
> years, so I have to rely on the system being able to produce the same
> dependencies and the same libraries with the same API (since I will have an
> old build configuration in my VCS).

which maven can do just fine...

Maven can do it if the repositories play along. As I said above: it's not the actual dependency management that worries me, it is what I get when I ask it to fulfill a dependency. And I think I'd end up specifying very specific versions, since I have seen too many breakages caused by patch releases -- and I'd include that lockdown for any transitive dependency. If you do that, the benefit of automating it seems a bit low. AFAIK Maven itself has a history of breaking with not only minor but even patch releases, which doesn't seem to be a good start for gaining any trust in dependency management.
 


>> > Everyone who did serious development with Maven seems to have ended up
>> > spending a lot of work doing lockdowns of dependencies and Maven
>> plugins. It
>> > sounds pretty scary to me, although I have to admit that I never got
>> beyond
>> > the "Hello World" stage with Maven myself (probably because I'm such a
>> > control freak or wuss).
>>
>> I don't have that much experience with Maven myself. Does maven not
>> allow you to make a reference to a specific version or some such?

Yes it does. It also allows you to control transitive dependencies
where necessary as well as controlling the repository everything is
retrieved from..

Another story I've heard is that you sometimes get more than you ask for, e.g. log4j if you get commons.logging. Of course you shouldn't really use commons.logging -- particularly not as a wrapper around java.util.logging (AFAIK the wrapper still creates a Throwable for every single log message) -- but that is a different issue ;-)

Which dependencies of a particular library (and which parts of it in the first place) you really need often depends on what you do with it. For example, Batik can easily be broken down into pieces, and while a dependency management system can model that with virtual packages, I don't get the impression that Maven does. JPackage seems to -- the Linux community has more of a tradition in that regard.

Again: in theory the idea could work, but it is hard to solve on a global scale (even if "global" just means "in our organisation" it already gets hard), and so far
I haven't seen anyone I'd trust to solve it, so I tend to stick with the much easier problem of solving dependency management on a per-project basis.

In some ways that is redundant, but in many ways easier. I guess you could compare it with the "copy and paste vs. creating abstractions" in source code, where we are all guilty of occasionally doing the former, aren't we? It's a tradeoff and so far the added complexity of solving the abstract problem seems too much compared to the net gain of having it solved.

Not that I would mind someone solving it, it's just that Maven still doesn't seem anywhere near. Maybe from the technical point of view, but not in terms of processes around the technology.

But maybe I am a control freak -- people have claimed that occasionally :-)

  Peter

Reinier Zwitserloot

unread,
Jan 18, 2008, 9:17:08 PM1/18/08
to The Java Posse
Let's not get too hung up on specifics - not what this thread is
about. What I envisioned is something roughly like this:

Each project carries with it a canonical name, which so happens to be
a URI that should probably be a valid URL for any project you want to
share with the world. Each project also carries with it a dependency
list, in the form of a list of these canonical URI thingies. This way
you know exactly where to go for more information (and, in fact, there
are file protocols and such for going to the project's URL and thus
being able to find descriptions, change logs, downloads, and other
stuff), and there are nice canonical names. Nothing is stopping you
from shipping the requisite projects (no download will even be
triggered) or running some sort of repository yourself. The point is
basically that there's a single, uniform, official way to 'host' your
projects if you want to, so that all tools, not just a maven-ish
construct, can get public information in a standardized way if it
seems useful to do so.

Right now you NEED a maven for such things; there is no way for e.g.
javac to figure out that it is possible to resolve an 'import
org.joda.time.format.*;' by heading on over to the joda-time site and
downloading the jar. There is no way for a UML diagramming tool to go
in and fetch an icon for the external dependency. It's
small stuff individually, but the notion is very very powerful. This
way you don't even need a central repository. I would simply suggest
that sun or some other entity runs a free and open hosting space where
you can very easily dump your stuff, to make sure this system reaches
its full potential.
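The idea above could look roughly like this: a tool does an HTTP GET on
the project's canonical URI and parses a small descriptor. Everything
here -- the descriptor format and its field names -- is invented purely
for illustration; the point is only that the lookup is uniform.

```java
// Sketch: parse a minimal "key: value" project descriptor, as a tool might
// after fetching it from the project's canonical URI.
import java.net.URI;
import java.util.LinkedHashMap;
import java.util.Map;

public class ProjectDescriptor {
    public static Map<String, String> parse(String text) {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String line : text.split("\n")) {
            int colon = line.indexOf(':');
            if (colon > 0) {
                fields.put(line.substring(0, colon).trim(),
                           line.substring(colon + 1).trim());
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        String descriptor = String.join("\n",
            "name: joda-time",
            "version: 1.5",
            "depends: http://example.org/projects/some-lib/1.2");
        Map<String, String> fields = parse(descriptor);
        URI canonical = URI.create("http://example.org/projects/joda-time");
        System.out.println(canonical + " -> version " + fields.get("version"));
    }
}
```

A UML tool, an IDE, or a compiler could all reuse the same lookup, which
is the whole point of making the canonical name a URI.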

I don't really think shipping strategies will change that much. Most
server-side projects just ship with all dependencies included - easier
and why wouldn't you do that anyway? That isn't really a problem now,
either. The 'problem' I see is that there's simply more information
that your tools should be able to figure out for themselves, and they
aren't doing so now. I envision Eclipse telling you that project
XYZ has a major security bug and asking if you would like to
update it now, with a pretty much 100% guarantee that your code will
continue working as before. I envision the ability to 'go into' (cmd/ctrl
+click) the source of some third party open source project, with my
tooling downloading it on the fly if required and if available. Stuff
like that. Such a system is also a built-in webstart: Simply 'run' a
project (identified by a URL) that you haven't even downloaded. The
default action is to go to that URL and ask what to do, which will
lead to webstarty things.

On Jan 19, 12:58 am, "Peter Becker" <peter.becker...@gmail.com> wrote:
> On Jan 19, 2008 7:56 AM, <manf...@mosabuam.com> wrote:
>
>
>
>
>
> > Quoting Peter Becker <peter.becker...@gmail.com>:
>
> > > On Jan 19, 2008 5:35 AM, Reinier Zwitserloot <reini...@gmail.com> wrote:
>
> > > I guess I was jumping the gun a little bit -- in the end it comes down
> > to a
> > > question of trust, not technology. I trust some Linux distributions
> > enough
> > > to run production servers using them, but with the whole Maven/Ivy story
> > I
> > > still see two differences: (a) a culture/history of quality doesn't seem
> > to
> > > be established and (b) a server can be replaced relatively easily, my
> > > product is more important to me.
>
> > Debian is currently in the process of including Maven and in the
> > longer run including some sort of controlled repository (or mapping to
> > it) in its repositories. That should be good enough. Until then
> > nothing stops you from running your internal repository server and
> > controlling it all internally down to the version of each transitive
> > dependency. Doing that with maven is still way better than doing it
> > without IMHO...
>
> I think it really depends on the context. I personally find the extra effort
> of mainting my dependencies myself not that bad, but that is partly since
> (a) I do not currently work in the context of a large organisation where you
> would set organisation-wide standards and (b) I am shy about introducing
> dependencies in the first place (a.k.a. "suffering from NIHS").

Alexey Zinger

unread,
Jan 19, 2008, 11:41:31 AM1/19/08
to java...@googlegroups.com
If we're dreaming, let's go all the way and make it so dependencies could be
resolved in an extensible way. If I'm not mistaken, Maven currently can't
download from SourceForge. Imagine if repositories worked similar to resource
bundles, where a jar could be pulled as a straight HTTP request, or a manifest
file describing a dependency for a library that knows how to pull from that
particular site and a URI for the target itself. So SourceForge could post
their custom dependency retriever library. And if you don't like theirs, you
can roll your own.


Alexey
2001 Honda CBR600F4i (CCS)
1992 Kawasaki EX500
http://azinger.blogspot.com
http://bsheet.sourceforge.net
http://wcollage.sourceforge.net


Reinier Zwitserloot

unread,
Jan 19, 2008, 1:34:38 PM1/19/08
to The Java Posse
Yeah, that would be excellent stuff.

On Jan 19, 5:41 pm, Alexey Zinger <inline_f...@yahoo.com> wrote:
> If we're dreaming, let's go all the way and make it so dependencies could be
> following in an extensible way.  If I'm not mistaken, Maven currently can't
> download from SourceForge.  Imagine if repositories worked similar to resource
> bundles, where a jar could be pulled as a straight HTTP request, or a manifest
> file describing a dependency for a library that knows how to pull from that
> particular site and a URI for the target itself.  So SourceForge could post
> their custom dependency retriever library.  And if you don't like theirs, you
> can roll your own.
>
> --- Viktor Klang <viktor.kl...@gmail.com> wrote:
> > >  On Jan 18, 2008 10:25 PM, Viktor Klang <viktor.kl...@gmail.com> wrote:
>
> > > >  What I envision is a gigantic distributed library repository, so you'll
> > > > never have to ship _anything_,
> > > > you just click a link to an URL that manages everything for you. (Think
> > > > Maven on crack and without the command-line scheisse)
>
> > > > Cheers,
> > > > -V
>
> > > >         /lift/ committer (www.liftweb.net)
> > > >       SGS member (Scala Group Sweden)
> > > >   SEJUG member (Swedish Java User Group)
> > > > \_____________________________________/...
>

Villane

unread,
Jan 19, 2008, 7:53:24 AM1/19/08
to The Java Posse
1. I'm sorry, but managing versioned dependencies on a single type
level is an absurd idea. If you use a particular version of a library
you just have to use it all, not take bits from one version and bits
from another. Anything else would be a nightmare for both users and
authors. Managing dependency versions must be done on the bundle
level, like OSGi and Maven do, and by bundle I mean what you call
"project" in your second point.
And specifying the source/target language version should also be done
on the same level.

The other ideas I can somewhat agree with in general, although not
fully:

2. Sounds good, but I probably wouldn't force this model on everyone
-- the language should ideally scale down to scripting like Scala

3. I like this idea, but I'm not sure it would work that well. Has
this been done in some language already?

4. This would be very useful indeed, I think.

5. The preambles should be a recommended best practice, not forced.
Sometimes you just have to trust people to do the right thing, not
stop them from doing the wrong by introducing artificial barriers.

Reinier Zwitserloot

unread,
Jan 19, 2008, 4:18:20 PM1/19/08
to The Java Posse


On Jan 19, 1:53 pm, Villane <vill...@gmail.com> wrote:
> 1. I'm sorry, but managing versioned dependencies on a single type
> level is an absurd idea. If you use a particular version of a library
> you just have to use it all, not take bits from one version and bits
> from another. Anything else would be a nightmare for both users and
> authors. Managing dependency versions must be done on the bundle
> level, like OSGi and Maven do, and by bundle I mean what you call
> "project" in your second point.
> And specifying the source/target language version should also be done
> on the same level.

Any given PROJECT is normally bound to use only one version of a
particular dependency (also a project), however, any app can consist
of lots of projects and they don't all have to use the same version.
It would obviously work a lot more smoothly if you do, but in real life
not everyone upgrades at the same time, which gives us the current
backwards compatibility obsession. I don't have all the details, but
clearly a lot of effort ought to be expended on figuring out a way to
get this backwards compatibility thing right, instead of realizing at
some later point in time that we're stuck with a mistake forever.

In particular, I don't see the problem with having a type system that
understands versions. I could create an older version of a given
object, and I can pass it around. In practice EVERY type reference is
versioned, though for the vast majority of them you don't specify
anything, and the version is gleaned from the project dependency list
instead. If you pass a v1.4 Foobar into a method that actually expects
a v1.5, then a special constructor of sorts is called on the v1.5
Foobar code, given the data of the v1.4 Foobar object, and asked to
'wrap' in such a way that it just works. Now, either libraries care
and implement this functionality (it's easier than it seems, see next
paragraph), or if they really don't want to, they don't, and you'll
get a compile time (=write time) error explaining that due to version
mismatch, you can't use these together.

It's not that hard, for two reasons: 1. it should be easy in such a
system to specify (or the system will even make this assumption unless
you override it) that the APIs are completely compatible and there's
nothing to worry about, and 2. if v1.5 can wrap around v1.4 code, and
v1.4 code can wrap around v1.3 code, you can use v1.3 code in v1.5, by
wrapping it twice. Therefore, anytime you break API, you just need to
make your new API capable of working with the version before that, and
you automatically get backwards compatibility to all versions that the
previous version could work with.
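The wrapping scheme in point 2 can be sketched in today's Java: each new
API version only ships an adapter for the version directly before it,
and older versions are reached by composing adapters. All the types here
are made up for illustration.

```java
// Sketch: v1.5 wraps v1.4, v1.4 wraps v1.3, so v1.3 reaches v1.5 by
// wrapping twice -- no direct v1.3->v1.5 conversion code is ever written.
public class VersionedWrapping {
    interface DateV3 { int hour(); }          // hypothetical old API
    interface DateV4 { int hourOfDay(); }     // renamed in the next version
    interface DateV5 { int getHourOfDay(); }  // renamed again

    // v4 knows how to wrap v3...
    static DateV4 wrapV3(DateV3 old) { return () -> old.hour(); }

    // ...and v5 knows how to wrap v4.
    static DateV5 wrapV4(DateV4 old) { return () -> old.hourOfDay(); }

    // Composition gives backwards compatibility to every older version.
    static DateV5 wrapV3AsV5(DateV3 old) { return wrapV4(wrapV3(old)); }

    public static void main(String[] args) {
        DateV3 legacy = () -> 14;
        System.out.println(wrapV3AsV5(legacy).getHourOfDay()); // 14
    }
}
```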

>
> 3. I like this idea, but I'm not sure it would work that well. Has
> this been done in some language already?

Not that I know of, at least not including the whole errors and
warnings system and the like. Most languages take some sort of pride
in being restricted to text editors. Smalltalk does some interesting
stuff in regards to language/editing intelligence integration.

>
> 5. The preambles should be a recommended best practice, not forced.
> Sometimes you just have to trust people to do the right thing, not
> stop them from doing the wrong by introducing artificial barriers.

You probably have to allow preamble-like stuff (redefining methods,
adding operators, maybe even changing control constructs) in
individual files as well, but I strongly believe that fully dynamic
scoping of such things should not ever be allowed in such a language.

Villane

unread,
Jan 20, 2008, 5:51:01 AM1/20/08
to The Java Posse
On Jan 19, 11:18 pm, Reinier Zwitserloot <reini...@gmail.com> wrote:
> Any given PROJECT is normally bound to use only one version of a
> particular dependency (also a project), however, any app can consist
> of lots of projects and they don't all have to use the same version.

Yes, and for example OSGi allows that currently. (of course, only if
these different versions don't actually "touch" anywhere)
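For reference, OSGi expresses this per-bundle versioning in the manifest;
a minimal example (bundle and package names invented):

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.consumer
Bundle-Version: 1.0.0
Import-Package: org.example.util;version="[1.4.0,2.0.0)"
```

Two bundles importing different ranges can each be wired to a different
exporter, which is how the side-by-side versions coexist.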

> In particular, I don't see the problem with having a type system that
> understands versions. I could create an older version of a given
> object, and I can pass it around. In practice EVERY type reference is
> versioned, though for the vast majority of them you don't specify
> [...]
> It's not that hard, for two reasons: 1. it should be easy in such a
> system to specify (or the system will even make this assumption unless
> you override it) that the APIs are completely compatible and there's
> nothing to worry about, and 2. if v1.5 can wrap around v1.4 code, and
> v1.4 code can wrap around v1.3 code, you can use v1.3 code in v1.5, by
> wrapping it twice. Therefore, anytime you break API, you just need to
> make your new API capable of working with the version before that, and
> you automatically get backwards compatibility to all versions that the
> previous version could work with.

Ok, maybe my initial reaction was too negative, I'm beginning to see
that this could work, somewhat. At first I thought this would just
create extra complexity, but actually it might move the complexity to
the place where it belongs -- in the library design, not left upon the
users to figure out. But I think this would only work if there were
good tooling to support it! Both on the library author's side and the
user's side.
Author's side: I should be able to point my IDE to a previous tagged
version in the code repository and say "calculate the backwards
compatibility between the current version and this" (perhaps some
release engineering support in the IDE could even let me say "previous
release" instead of pointing to an explicit version) and after
reviewing the changes, an automated refactoring would allow me to
automatically create conversions from the old types to the new types
where necessary.
It would be extra fine if I could keep the current version's design
clean (uncluttered by the compatibility stuff) and the conversions
would be in a different class.
User's side: should be able to get an overview of which versions of
libraries are used (warnings similar to "deprecation" if using older
versions than the most recent used), possibly automated refactorings
to switch to new versions.

Reinier Zwitserloot

unread,
Jan 20, 2008, 9:02:33 AM1/20/08
to The Java Posse
On Jan 20, 11:51 am, Villane <vill...@gmail.com> wrote:
> On Jan 19, 11:18 pm, Reinier Zwitserloot <reini...@gmail.com> wrote:
>
> Ok, maybe my initial reaction was too negative, I'm beginning to see
> that this could work, somewhat. At first I thought this would just
> create extra complexity, but actually it might move the complexity to
> the place where it belongs -- in the library design, not left upon the
> users to figure out. But, I think this would only work if there would
> be good tooling to support this! Both on the library author's side and
> user's side.

Yes, it's not exactly easy even if such a system existed. Writing
tests for this stuff is a pain just to start with.

I also don't imagine ALL libraries will jump on this bandwagon. The
core libraries (the equivalent to java.lang and java.util and the
like) certainly would; backwards compatibility in core is crucial.
Certain high profile libraries that many people use (and I expect core
to be much smaller because of the ease of including other projects)
will probably also get the backwards compatibility treatment when the
maintainers decide to make a breaking change. I don't expect that a
whole batch of breaking changes will show up at every version upgrade;
there are still users to re-educate. Having the option to undo things
which in retrospect were wrong is priceless though.

Because of all the other stuff (canonical AST, project-based) I
imagine tooling is a first class citizen amongst concerns. Right now
most java features are discussed in a tooling-free void, which is
kinda stupid if you ask me, because the overwhelming majority of
halfway serious java programmers use some IDE or other (and I have
gone on record stating that programming java in a notepad-equivalent
text buffer is stupid in the extreme. Use a language that is better
designed for such stuff, such as jython, jruby or scala!).

> Author's side: I should be able to point my IDE to a previous tagged
> version in the code repository and say "calculate the backwards
> compatibility between the current version and this" (perhaps some
> release engineering support in the IDE could even let me say "previous
> release" instead of pointing to an explicit version) and after
> reviewing the changes, an automated refactoring would allow me to
> automatically create conversions from the old types to the new types
> where necessary.
> It would be extra fine if I could keep the current version's design
> clean (uncluttered by the compatibility stuff) and the conversions
> would be in a different class.

Good idea.

> User's side: should be able to get an overview of which versions of
> libraries are used (warnings similar to "deprecation" if using older
> versions than the most recent used), possibly automated refactorings
> to switch to new versions.

This is why the list is supposed to be taken together. With a
canonical AST, it is quite feasible for libraries to ship auto-
conversion tools. These take in ASTs, analyse them for calls to the
old version of the library, and automatically convert them to the new
call system. For those changes which cannot be automated, it simply
adds todos (the same way any AST job can add errors or warnings). For
those changes which aren't 100% semantically equivalent, it adds a
'warning node' that the author can later walk through, verify, and
remove.
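A conversion script over such a canonical AST might look roughly like
this -- the tiny "AST" and the method names being rewritten are all
invented for illustration:

```java
// Sketch: a library-shipped converter walks the consumer's AST, rewrites
// calls to the old API, and attaches todo/warning notes where the change
// is not automatic or not provably semantically equivalent.
import java.util.ArrayList;
import java.util.List;

public class AutoConverter {
    static class CallNode {
        String method;
        final List<String> notes = new ArrayList<>();
        CallNode(String method) { this.method = method; }
    }

    static void convert(List<CallNode> ast) {
        for (CallNode node : ast) {
            if (node.method.equals("getHour")) {
                node.method = "getHourOfDay";  // automatic rename...
                node.notes.add("WARNING: verify 12h vs 24h semantics"); // ...flagged for review
            } else if (node.method.equals("setLenient")) {
                node.notes.add("TODO: no automatic conversion; rewrite by hand");
            }
        }
    }

    public static void main(String[] args) {
        List<CallNode> ast = new ArrayList<>();
        ast.add(new CallNode("getHour"));
        convert(ast);
        System.out.println(ast.get(0).method + " " + ast.get(0).notes);
    }
}
```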

From time to time you'd ask your IDE to do a code update for you, and
it'll zip right through, changing your code where needed. After that
you walk through the new todos and warnings (if any), address them,
and you're up to date. The tooling can let you choose whether to
manually verify every change, or to trust the code conversion
script of the library you use.

hlovatt

unread,
Jan 21, 2008, 3:40:43 AM1/21/08
to The Java Posse
I agree with your points, particularly point 1 (source statement). I
suggested this as an RFE. You can vote for it here:

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6519124

On Jan 17, 10:26 pm, Reinier Zwitserloot <reini...@gmail.com> wrote:
> I sometimes hear discussions about what you would do if you could
> throw backwards compatibility out the window, or how you'd set up a
> java3, etc, etc. Invariably these discussions get bogged down in
> pointless minutiae like 'I'd make switch statements more sensical' or
> even "I'd make generics reifable". That's not at all interesting.
> here's my list of what I'd do better. Note that I firmly believe that
> a language that has all of these features will beat the absolute pants
> off of scala, which looks like a kindergarten's toy in comparison.
>
> 1. A real backwards compatibility preservation system.
>
> This system allows you to completely redesign the language if you
> wanted between versions, and old and new code, both source and binary,
> will happily coexist and chat with each other without any problems.
> For source compatibility, such a construct requires you to specify the
> version of your source files (you'd add a "source 1.5" at the top or
> some such, or you'd do so at the package or project level), and the
> compiler will automatically realize that e.g. in a 1.6 or below
> source,array literals create arrays, whereas in 1.7 or up, they
> actually create List<T>s instead, just to give an example. On the
> binary side, all versions of the runtime ship with both the latest
> core libraries and 'diffs' between the latest core library and each
> previous version. A 'source 1.5' file would use the 1.5 version of
> those libraries, and e.g. a java.util.Date.getHour() method wouldn't
> even exist here. You can explicitly use older versions (in case you
> need to integrate with old code or some such) using types that carry
> the target version (Date{1.4}), and types are capable of carrying
> their own conversion code so that an 'old' Date can turn itself into a
> 'new' Date and vice versa.
>
> ALL language designers screw up at some point or other. So far
> languages have either decided to live with em (java), or to reset the
> clock every so often (python 3000) and completely break compatibility.
> I don't have a complete spec for how to pull this off but I'm sure
> it's possible to do this right. The one major beef I have with scala
> is the perlish cartoon swearing. There are way too many cutesy
> operators in the scala preamble (scala.lang.*), which is stupid,
> because scala actually allows you to import anything into your current
> lexical scope if you think you really need "/:" as an operator analogy
> to foldLeft for some block of code. Absolutely no need to foist that
> on everybody. Because there's a book now, taking these mistakes out is
> no longer realistic, because scala has no way of tracking target
> versions. Whoops!
>
> 2. Projects.
>
> A concept of 'projects' (kinda like OSGi/JSR-whatsit), but in a big
> way. Each project is internally monolithic (meaning, you compile it in
> one go, and post compilation you can't replace or recompile individual
> classes, it's all or nothing. Class files no longer exist as such,
> there are just jar files), and contains a separate list of 'exported'
> calls (ordinary public methods are only public relative to the
> project, not anything outside of it). This enables excellent
> optimizations (dead code elimination and finding non-exported classes
> that can effectively be made final), and is a great boon for tooling
> because tools can make far more assumptions about the code base that way.
> Think about it; figuring out code coverage is so much easier if you
> have a relatively small set of 'inroads' (all exported stuff + any
> main method).
>
> There are issues to be worked out in regards to transporting objects
> between projects, but I'm sure a smart solution can be found.
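As an illustration of the optimization claim, here is a toy reachability pass over a per-project call graph: anything not reachable from the exported set is dead and can be eliminated (or made effectively final). The call-graph representation and the method names are invented for the sketch; a real compiler would work on resolved symbols, not strings.

```java
import java.util.*;

// Toy dead-code analysis: given a per-project call graph and the set of
// exported methods, compute which internal methods are actually live.
class ProjectAnalyzer {
    static Set<String> reachable(Map<String, List<String>> callGraph,
                                 Set<String> exported) {
        Set<String> seen = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(exported);
        while (!work.isEmpty()) {
            String method = work.pop();
            if (seen.add(method)) {
                // Follow every call this method makes inside the project.
                work.addAll(callGraph.getOrDefault(method, List.of()));
            }
        }
        return seen;
    }
}
```

Because the project is compiled monolithically, the exported set is the complete list of 'inroads', which is exactly why this analysis is sound here and unsound for ordinary public methods.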
>
> 3. Generators.
>
> Imagine for a moment that java allowed you to specify 'literal
> builders'. These literal builders would be given a string. They emit
> code that can call 'special' (private-ish) constructors, as well as
> AST-based errors and warnings (from character Y to character Z, error:
> "open parenthesis not matched with a closing one." or some such). For
> the regexp system, you would be able to construct them as a literal,
> and eclipse/IDEA/NetBeans would tell you on the spot, as you write it,
> that your regexp is broken. Contrast this to the current situation
> where you need a load of ugly escapes AND you need to catch regexp
> compile exceptions even though you know full well your regexp is error
> free (or you use an alternative method name that signals you know what
> you're doing). That stuff is testable at compile time, so it should be
> done at compile time (=write time for a sufficiently smart IDE). You
> could use this for SQL literals, XML literals, regexp literals, JSON
> literals, and many many other forms of literals. (The specialish
> constructor comes in handy here so your regexp compiler can compile
> the whole thing to a regexp finite state machine, and supply THIS to
> the special constructor. This constructor knows the node list is from
> the generator and thus doesn't need to double check it for illegal
> arguments and the like, it can just accept it as being error free.
> Hey, look, we sped up our code at runtime AND made it more readable
> at write time! Win!)
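A sketch of the regexp case, with the caveat that a real literal builder would run inside the compiler/IDE at write time rather than at runtime, and would report AST-based errors with character spans. The class and method names here are made up:

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

// Toy 'literal builder' for regexps: validation happens once, in the
// builder; the private constructor then trusts the precompiled machine.
class RegexLiteral {
    private final Pattern compiled;

    // The 'special' constructor: only the builder calls it, so it can
    // accept the compiled pattern without double-checking anything.
    private RegexLiteral(Pattern compiled) { this.compiled = compiled; }

    /** The builder step: this is what an IDE would run as you type. */
    static RegexLiteral build(String source) {
        try {
            return new RegexLiteral(Pattern.compile(source));
        } catch (PatternSyntaxException e) {
            // Stand-in for an AST-based, write-time error with a span.
            throw new IllegalArgumentException(
                "error at index " + e.getIndex() + ": " + e.getDescription());
        }
    }

    boolean matches(String input) { return compiled.matcher(input).matches(); }
}
```

Usage: `RegexLiteral.build("a+b")` succeeds; `RegexLiteral.build("(unclosed")` fails at build time, which is the whole point.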
>
> Of all things scala has solved in libraries (using operator
> overloading and the like), one of the few things they hardcoded in the
> spec is.... support for XML literals. Whoops! Yet another scala
> mistake. I love scala to bits, but with some imagination you can
> already see the whiners' standard arguments why 'scala is the old and
> crappy and we should use this shiny new thing!'. Generators move this
> stuff to the Library which even means (see suggestion #1) you can take
> some outdated formats out in the future with no fuss, and anyone can
> write new ones.
>
> 4. Canonical AST.
>
> Right now the various java tools use various means to turn java code
> into tokens and from there into code, errors, warnings, 'findbugs'-
> like suggestions, javadocs, etcetera, etcetera. Unfortunately this
> means there is no standard. Eclipse generates wildly different error
> messages than e.g. netbeans, and they all differ slightly in how code
> is rendered and the like. By standardizing this and shipping it along
> with the VM, you make life easy for tool writers, you standardize
> across tools, and you open the door to 'tool manipulators' - think APT
> but then in a much bigger way. In such a world, you could plug java5
> into an eclipse written before java5 ever existed and it would just
> work, including the proper errors, warnings, etcetera. Maybe eclipse
> would come out later with an 'add on' with extra quick fixes and more
> detailed (find-bugsian) warnings, but that construction is not bound
> to the IDE and you could use the netbeans extra set on eclipse and
> vice versa. FindBugs would just be part of your IDE, flagging code as
> you type it, same as the usual syntax errors. To do this, FindBugs
> just writes one project to the standard AST model and all IDEs can
> just use it, no need for an eclipse plugin, a netbeans plugin, a this
> plugin, a that plugin, an ant plugin, etc, etc, etc. Just works.
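A toy version of what a shared AST model plus pluggable checks might look like. All names here are invented for illustration; a real canonical AST would be far richer, but the shape is the point: one node type, one visitor interface, and a checker written once runs unchanged in any IDE.

```java
import java.util.ArrayList;
import java.util.List;

// One node shape every tool shares.
class AstNode {
    final String kind;                 // e.g. "MethodDecl", "Identifier"
    final int startChar, endChar;      // source positions for error spans
    final List<AstNode> children;
    AstNode(String kind, int startChar, int endChar, List<AstNode> children) {
        this.kind = kind; this.startChar = startChar;
        this.endChar = endChar; this.children = children;
    }
}

// One diagnostic shape every tool shares.
class Diagnostic {
    final int startChar, endChar;
    final String message;
    Diagnostic(int startChar, int endChar, String message) {
        this.startChar = startChar; this.endChar = endChar;
        this.message = message;
    }
}

// A FindBugs-style rule: written once, usable by every IDE.
interface AstCheck {
    void visit(AstNode node, List<Diagnostic> out);
}

class Ast {
    // Walk the tree, applying every registered check to every node.
    static List<Diagnostic> run(AstNode root, List<AstCheck> checks) {
        List<Diagnostic> out = new ArrayList<>();
        walk(root, checks, out);
        return out;
    }
    private static void walk(AstNode node, List<AstCheck> checks,
                             List<Diagnostic> out) {
        for (AstCheck check : checks) check.visit(node, out);
        for (AstNode child : node.children) walk(child, checks, out);
    }
}
```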
>
> 5. Pre-ambles.
>
> DSLs are nice, but on the other hand they allow you to create some
> really weird code. There are no 'puzzlers for ruby', because the book
> would be a 5000000 page long horror story; due to the dynamics of
> ruby, no single line of code can be relied upon to do anything unless
> you know exactly what has gone before at runtime, which is halting
> problem hard to do. Scala does a better job at it; its DSL shenanigans
> are not runtime-scoped, but lexically scoped. You at least know where
> to look for the code that redefines what String.toLowerCase() does,
> for example. However, going through all that for every file is still
> annoying, as is writing some custom change into every file for a
> project. Thus, pre-ambles: Each project (see suggestion #2) has one
> place where you can decide to define that 'anyList.sort();' is a
> shortcut for "Collections.sort(anyList);" for example. Anyone that
> wants to even begin reading your code first has to read the pre-amble,
> but once he has, he can read all code in the entire project with a
> pretty good idea of how things work. As a bonus, IDEs can easily be
> fully aware of each project's pre-amble. Rails (or other DSL heavy
> stuff) for java would then basically become a 'include
> rails.preamble.java' in your project's preamble.
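A toy model of the `anyList.sort()` to `Collections.sort(anyList)` rewrite described above. A real preamble would of course match on the AST and resolved types rather than on strings, and all names here are invented:

```java
import java.util.Map;

// Toy preamble: a project-wide table of call rewrites the compiler
// applies before anything else. Anyone reading the project's code first
// reads this table, then knows what every shortcut means.
class Preamble {
    private final Map<String, String> rewrites;
    Preamble(Map<String, String> rewrites) { this.rewrites = rewrites; }

    // %s stands for the receiver expression ('anyList' above).
    String desugar(String receiver, String call) {
        String template = rewrites.get(call);
        return template == null
                ? receiver + "." + call              // no shortcut: leave as-is
                : String.format(template, receiver); // expand the shortcut
    }
}
```

Usage: with `Map.of("sort()", "Collections.sort(%s)")` as the table, `desugar("anyList", "sort()")` yields `Collections.sort(anyList)`, while calls not in the table pass through untouched.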