Using released versions for STARR


Grzegorz Kossakowski

Jun 18, 2013, 1:51:47 PM
to scala-internals
Hi,

I just wanted to draw everybody's attention to PR 2658 (https://github.com/scala/scala/pull/2658).

Adriaan proposes that we switch to using released, tagged versions of the compiler for STARR. Note that he proposes to keep the old mechanism of using arbitrary STARRs around for a while. We are just switching the default. The old mechanism would serve as a safety net in case we run into a situation where sticking to a released version is not possible. However, the intention is to eventually move to tagged versions for STARR completely.

We discussed the STARR process in the past in this thread (https://groups.google.com/d/topic/scala-internals/fO8cs9Ladkw/discussion). I'd like to draw your attention to the post by Lukas (https://groups.google.com/d/msg/scala-internals/fO8cs9Ladkw/U0MAoxysUH0J). He mentions a specific scenario where a new STARR is needed. In the discussion he says:

I could hack the compiler to search in both places, but we don't want that code to be
committed either.

That's probably the bit we need to revisit. I'd argue that handling the transition in that way is probably best. If we have separate commits in the history dedicated to making the compiler work with both the old and the new version of the library, then the transition is easier to understand. At the very least, I'd be completely fine with such a practice, and I don't consider it pollution of the history.
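
For illustration, the dual lookup Lukas described could be sketched roughly like this. This is a hypothetical sketch only: real symbol resolution happens inside the compiler's symbol table, not via `Class.forName`, and the package names below are made up for the example.

```scala
// Hedged sketch: while a class migrates between packages, resolve it from
// the new location first and fall back to the old one. Illustrative only.
object TransitionLookup {
  def tryLoad(fqcn: String): Option[Class[_]] =
    try Some(Class.forName(fqcn))
    catch { case _: ClassNotFoundException => None }

  // Search the (hypothetically) new home first, then the old one.
  def resolve(simpleName: String): Option[Class[_]] =
    tryLoad(s"scala.collection.immutable.$simpleName")
      .orElse(tryLoad(s"scala.$simpleName"))
}
```

Once the STARR catches up, the fallback branch (and the commit introducing it) can be removed again, which keeps the transition visible in the history.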

In exchange for a bit more engineering around the transition we get real, tagged, stable reference compilers, which Josh was advocating for (https://groups.google.com/d/msg/scala-internals/fO8cs9Ladkw/hpiOaxOaciQJ). I agree with all his points.

In the same thread Eugene said (https://groups.google.com/d/msg/scala-internals/fO8cs9Ladkw/FK8JfubW59IJ):

In my experience it's rarely possible to have a starr as a separate commit.

Eugene, could you elaborate? Could you show a scenario where it's strictly not possible to work around the transition problem the way Lukas described, i.e. by searching for symbols in both the old and the new location?

--
Grzegorz Kossakowski
Scalac hacker at Typesafe
twitter: @gkossakowski

Adriaan Moors

Jun 18, 2013, 2:01:03 PM
to scala-i...@googlegroups.com
Also, note that this is crucial for modularizing the Scala compiler.
If we want, say, partest to be a separate module at a separate github project,
we must be able to compile it with a released version of Scala while using it to test a more recent development version.

The new defaults for PR validation enforce this setup so that we can gain some experience with it before making the split.



Eugene Burmako

Jun 18, 2013, 2:05:38 PM
to <scala-internals@googlegroups.com>
I'd also like to mention the question of quasiquotes. Prohibiting scalac from being built with transient starrs means that we will significantly slow down adoption of quasiquotes in our codebase.

Grzegorz Kossakowski

Jun 18, 2013, 2:08:47 PM
to scala-internals
On 18 June 2013 14:05, Eugene Burmako <eugene....@epfl.ch> wrote:
> I'd also like to mention the question of quasiquotes. Prohibiting scalac from being built with transient starrs means that we will significantly slow down adoption of quasiquotes in our codebase.

We plan to use milestones as STARR so the lag wouldn't be too big.

Also, I think the compiler should be the last place where quasiquotes are used, once we know they are fairly stable and tested. In general, in the compiler we should stick to battle-tested libraries.

Eugene Burmako

Jun 18, 2013, 2:13:31 PM
to scala-i...@googlegroups.com
Yes, Lukas' example is a very nice illustration of the problem at hand, and I think his proposed solution is quite good here. I don't think it's a problem, though, that we need some hacking around to handle this situation: the hack can still be made in a separate commit and documented, which is both sufficient and scalable.

I also have to retract my assessment about it rarely being possible to have a starr as a separate commit. I used to work in an area of the compiler that was in constant flux and that touched a lot of things (e.g. the classtag refactorings). In that area it was indeed very common to have to bend over backwards to perform changes. Nowadays, however, I very infrequently find myself needing to do non-trivial things wrt starr. It's almost always "change something, commit, rebuild a starr, commit, clean up old stuff, commit".

Eugene Burmako

Jun 18, 2013, 2:18:15 PM
to <scala-internals@googlegroups.com>
Quasiquotes are the ultimate way to work with trees. The readability difference between manual construction (and deconstruction) and quasiquotes is humongous.
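
To make the contrast concrete, here is a small sketch. This assumes Scala 2.11+, where quasiquotes ship with scala-reflect (on 2.10 they required the macro paradise plugin); it is an illustration, not code from the compiler.

```scala
import scala.reflect.runtime.universe._

object QuasiquoteDemo {
  // Manual construction of the tree for `x.foo(y)`:
  val manual: Tree =
    Apply(Select(Ident(TermName("x")), TermName("foo")),
          List(Ident(TermName("y"))))

  // The same tree written as a quasiquote:
  val quoted: Tree = q"x.foo(y)"

  // Both spell out the same structure.
  val same: Boolean = manual equalsStructure quoted
}
```

The manual form grows quickly as trees get bigger, which is where the readability gap really shows.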

The quest to improve the quality of our codebase has been very difficult. Even tiny improvements have required significant effort. Quasiquotes offer something really big in this area, so I think it's worth thinking twice before being overly conservative.


Grzegorz Kossakowski

Jun 18, 2013, 2:19:12 PM
to scala-internals
On 18 June 2013 14:13, Eugene Burmako <eugene....@epfl.ch> wrote:
> Yes, Lukas' example is a very nice illustration of the problem at hand, and I think his proposed solution is quite good here. I don't think it's a problem, though, that we need some hacking around to handle this situation: the hack can still be made in a separate commit and documented, which is both sufficient and scalable.

Well, resolving a symbol from two different locations is just a transition strategy. I don't think we need to call it a hack; it's really a sensible solution to the whole bootstrapping problem.
 
> I also have to retract my assessment about it rarely being possible to have a starr as a separate commit. I used to work in an area of the compiler that was in constant flux and that touched a lot of things (e.g. the classtag refactorings). In that area it was indeed very common to have to bend over backwards to perform changes. Nowadays, however, I very infrequently find myself needing to do non-trivial things wrt starr. It's almost always "change something, commit, rebuild a starr, commit, clean up old stuff, commit".

Cool. Then it seems that sticking to milestones should work. You'd just need to delay cleanups a little, but I believe that's OK given the other benefits we get from this process.

Grzegorz Kossakowski

Jun 18, 2013, 2:22:34 PM
to scala-internals
On 18 June 2013 14:18, Eugene Burmako <eugene....@epfl.ch> wrote:
> Quasiquotes are the ultimate way to work with trees. The readability difference between manual construction (and deconstruction) and quasiquotes is humongous.

> The quest to improve the quality of our codebase has been very difficult. Even tiny improvements have required significant effort. Quasiquotes offer something really big in this area, so I think it's worth thinking twice before being overly conservative.

Let's discuss this once we have quasiquotes working, and stick to STARR issues in this thread.

From what I can see, the only difficulty that switching to a tagged version of the compiler for STARR imposes on introducing quasiquotes in the compiler is a potential delay of 1-2 months between merging them into our code base and starting to use them in the compiler.

Eugene Burmako

Jun 18, 2013, 2:24:50 PM
to scala-internals
By the way could we also please explicitly hear the benefits of the
proposed change?

And one more thing. What are the new defaults for PR validation? Is
this something that's being proposed, or it's already been implemented
and pushed?


Eugene Burmako

Jun 18, 2013, 2:27:16 PM
to scala-internals
Well, we're discussing potential implications of the proposed change,
and quasiquotes outline such implications. I don't think we should
drop this line of conversation just because it's about a yet
unreleased feature.

1-2 months of delay is only one facet. Another one is discovering and
fixing bugs in quasiquotes while migrating the compiler to using them.
Every such bug will impose more delays.


Grzegorz Kossakowski

Jun 18, 2013, 2:32:53 PM
to scala-internals
On 18 June 2013 14:27, Eugene Burmako <eugene....@epfl.ch> wrote:
> Well, we're discussing potential implications of the proposed change,
> and quasiquotes outline such implications. I don't think we should
> drop this line of conversation just because it's about a yet
> unreleased feature.

Sorry, I just meant that we should drop the argument about whether to switch to quasiquotes or not; let's leave that part for another time. I agree we should take potential implications into account.
 
> 1-2 months of delay is only one facet. Another one is discovering and
> fixing bugs in quasiquotes while migrating the compiler to using them.
> Every such bug will impose more delays.

If the risk of a fatal problem like that is high, then quasiquotes shouldn't be introduced until the risk is lowered, because they may hamper the productivity of all the people working on the compiler who might not be interested in debugging quasiquote-related issues. If the risk is low (we have reasonable confidence that quasiquotes are stable enough), then a delay of 1-2 months for delivering bug fixes shouldn't be a problem IMO.


Grzegorz Kossakowski

Jun 18, 2013, 2:37:54 PM
to scala-internals
On 18 June 2013 14:24, Eugene Burmako <eugene....@epfl.ch> wrote:
> By the way could we also please explicitly hear the benefits of the
> proposed change?

Josh outlined some of the reasons. It's about being transparent and reproducible. By sticking to a tagged version that was built using a well-known process (the release process), we can be sure that the binaries we use for STARR were built correctly. Also, the problem of having the sources for STARR disappears.

Moreover, this process forces smoother transition strategies on us when we do refactorings, which reduces the risk of breaking Eclipse.
 
> And one more thing. What are the new defaults for PR validation? Is
> this something that's being proposed, or it's already been implemented
> and pushed?

I believe PR 2658 contains the relevant pieces but I'll let Adriaan explain that part.


Eugene Burmako

Jun 18, 2013, 2:39:33 PM
to <scala-internals@googlegroups.com>
It's not uncommon for bugfixes in the compiler to uncover other, previously hidden problems. That doesn't stop us from introducing those bugfixes, right?

As for stability of quasiquotes, I can't say anything right now. The M4 release will give us more information here. At a glance, I think it should be fairly easy to test them quite extensively, just by comparing pre- and post-quasiquote trees emitted by the compiler for, say, the codebase of scalac and all the tests. However, this is specific to quasiquotes and not very relevant to the current discussion, so let's indeed discuss this elsewhere.


Den Sh

Jun 18, 2013, 2:42:32 PM
to scala-i...@googlegroups.com
Here is the reply to the message written by Eugene on GitHub:

> Why is it a bad idea to use them within the compiler? Should we have to wait until 2.12 (6 more months!) to begin simplifying our codebase?

Because quasiquotes used to be implemented with quasiquotes, I a few times encountered a situation where I needed to change the implementation of some part of the quasiquote logic, but it wasn't possible without converting all the code to regular ASTs, rebuilding starr, and converting it back. I think such changes are long gone, but you never know.

The second reason is that developing quasiquotes and developing the compiler with quasiquotes in parallel isn't compatible with the proposed model of rare starr updates. If you use them now in their current form, you might encounter corner cases that aren't covered yet. In the current model you can quickly fix such an issue in a separate commit with an updated starr and continue your original development without a problem. On the other hand, if you conform to the new model, you'll have to wait for the next public release to use the fix.

So in the end, if you go for rare starr updates, you'll have to be sure that quasiquotes are reasonably stable before they can be used in the compiler.

Eugene Burmako

Jun 18, 2013, 2:43:53 PM
to <scala-internals@googlegroups.com>
Well, the current starr mechanism looks pretty stable to me. After the linked discussion I rethought my approach, which was indeed not very careful, and everything has been fine and reproducible ever since.

What do you mean by "correctly built"?



Grzegorz Kossakowski

Jun 18, 2013, 2:47:32 PM
to scala-internals
On 18 June 2013 14:43, Eugene Burmako <eugene....@epfl.ch> wrote:
> Well, the current starr mechanism looks pretty stable to me. After the linked discussion I rethought my approach, which was indeed not very careful, and everything has been fine and reproducible ever since.
>
> What do you mean by "correctly built"?

Using the right flags. AFAIU, you can put any jars in the lib/ directory and claim that those are the new STARR.

Eugene Burmako

Jun 18, 2013, 2:52:44 PM
to <scala-internals@googlegroups.com>
I see. Why not store those flags in the jars and then have the kitty validate them?


Jason Zaugg

Jun 18, 2013, 2:53:02 PM
to scala-i...@googlegroups.com
In case of a real need, we could publish a tagged release between milestones. But that should be the exception rather than the rule. 

-Jason

Grzegorz Kossakowski

Jun 18, 2013, 2:55:44 PM
to scala-internals
On 18 June 2013 14:53, Jason Zaugg <jza...@gmail.com> wrote:

>> If the risk of a fatal problem like that is high, then quasiquotes shouldn't be introduced until the risk is lowered, because they may hamper the productivity of all the people working on the compiler who might not be interested in debugging quasiquote-related issues. If the risk is low (we have reasonable confidence that quasiquotes are stable enough), then a delay of 1-2 months for delivering bug fixes shouldn't be a problem IMO.

> In case of a real need, we could publish a tagged release between milestones. But that should be the exception rather than the rule.

I agree. 

Just to remind everybody: we are keeping the old mechanism for deploying STARRs in place. The proposal is to use it only when it's absolutely needed. Our hope is that such occasions will be rather rare.

Eugene Burmako

Jun 18, 2013, 3:03:56 PM
to <scala-internals@googlegroups.com>
Wait a second. What are the benefits over the current approach? Why migrate from the old mechanism in the first place?


Grzegorz Kossakowski

Jun 18, 2013, 3:43:13 PM
to scala-internals
On 18 June 2013 15:03, Eugene Burmako <eugene....@epfl.ch> wrote:
> Wait a second. What are the benefits over the current approach? Why migrate from the old mechanism in the first place?

I thought we discussed this already, no?

Eugene Burmako

Jun 18, 2013, 3:56:53 PM
to <scala-internals@googlegroups.com>
If I'm not mistaken, so far we've seen the arguments of: 1) reproducibility, 2) correct flags, 3) the necessity of sources for starrs, 4) partest (and probably others to come). To me, none of them seems to be a major issue wrt the current approach.

As I've mentioned, 1 is not a problem anymore (the sha of the starr plus a dedicated commit in the git history is as good for reproducibility as tags, from what I can see), 2 can be enforced with a small tweak to the kitten, 3 is more of a nuisance than a blocker, and 4 isn't about starrs (if we modularize something away from the compiler, then naturally that something shouldn't use starrs, because starrs are only for scala-compiler.jar and its dependencies).


Lukas Rytz

Jun 18, 2013, 4:21:08 PM
to scala-i...@googlegroups.com
I have to say I'm also not really convinced by this change; it seems more like a process issue
that should be handled in reviews.

Adriaan Moors

Jun 18, 2013, 5:07:26 PM
to scala-i...@googlegroups.com
To summarize the advantages of not using a hand-crafted starr:

  - we need to standardize on a compiler version that can compile all the subprojects of the compiler, so that we can build these modules separately using this released version (these subprojects will move to their own github projects, their own Scala-dependent release schedules, their own builds, ...)
  - not being able to quickly hand-roll a new starr is an advantage: we don't want the compiler code base to move to new features too fast; we want stability
    - quasiquotes are great, but I don't think we should start using them in the compiler right away (they need to be battle-tested as an experimental feature first, just like any other feature)
  - using a released version of the compiler to build the compiler gets PR validation to the test suite much faster. We'll still test stability, but failures there are rare enough that we could do that only in nightlies

Grzegorz Kossakowski

Jun 18, 2013, 5:12:46 PM
to scala-internals
Also,

I just thought of another argument: milestones are slightly more tested (or at least have a chance of being more tested) than a random snapshot version tagged as the new STARR. This is another argument for stability.

Eugene Burmako

Jun 18, 2013, 5:18:46 PM
to <scala-internals@googlegroups.com>
1) How is this different from the current system? Starrs also provide a way to standardize a compiler version, right?
2a) Did we have a lot of occasions in the past when new starrs caused regressions? How significant is that number in comparison with the number of other regressions?
2b) Quasiquotes are just a library, modulo a small pattern matcher hack. How is using quasiquotes different from introducing something new in a standard library and using that something in stdlib or in the compiler itself?
3) From the looks of it, we can retain our current system and skip building locker.



Adriaan Moors

Jun 18, 2013, 5:35:46 PM
to scala-i...@googlegroups.com
On Tue, Jun 18, 2013 at 2:18 PM, Eugene Burmako <eugene....@epfl.ch> wrote:
> 1) How is this different from the current system? Starrs also provide a way to standardize a compiler version, right?
I would hardly call a couple of bash scripts that download an otherwise unused set of jars, built in an unknown way, "standardized".
I'm sure we've all run into our share of download/caching issues with them.
 
> 2a) Did we have a lot of occasions in the past when new starrs caused regressions? How significant is that number in comparison with the number of other regressions?
It's not related to regressions. It's about purposely slowing the evolution of the compiler -- aka stabilization.
Please see my other advantages below. Here, I'll throw in another one: building in the IDE and other build tools becomes easier because they also don't ship with STARR, but with a released version of the compiler.
 
> 2b) Quasiquotes are just a library, modulo a small pattern matcher hack. How is using quasiquotes different from introducing something new in a standard library and using that something in stdlib or in the compiler itself?
We also don't switch to new untested libraries lightly.
 
> 3) From the looks of it, we can retain our current system and skip building locker.
Not without losing the advantages that we're looking for. Listed below.

Eugene Burmako

Jun 18, 2013, 5:49:29 PM
to <scala-internals@googlegroups.com>
1) Understood. I agree that replacing Artifactory with Maven will noticeably simplify tooling.
2) Fair enough; from the standpoint of slowing down the evolution, this new initiative also makes sense.

Lukas Rytz

Jun 19, 2013, 1:54:06 AM
to scala-i...@googlegroups.com
So, for the last point: I'm very surprised that we want to skip locker.

Imagine (just imagine!) MG writes a new backend which speeds up the compiler by 10%. To run
the test suite he uses a compiler that was built with the *old* backend -- how can he know that
tests won't break when using a compiler that was built with the new one?



Adriaan Moors

Jun 19, 2013, 2:07:01 AM
to scala-i...@googlegroups.com
That's an excellent point. I guess we should keep locker. My main point was about only using released starrs.
Skipping locker was a nice bonus to speed up PR validation.

It still seems unlikely you could have a compiler that passes the stability test but fails the test suite when bootstrapped.

Adriaan Moors

Jun 19, 2013, 2:14:02 AM
to scala-i...@googlegroups.com

On Tue, Jun 18, 2013 at 11:07 PM, Adriaan Moors <adriaa...@typesafe.com> wrote:
> It still seems unlikely you could have a compiler that passes the stability test but fails the test suite when bootstrapped.

Since this is so mind-boggling, let me make it more explicit and see if it actually makes sense.
Using locker to compile the test suite checks that the codegen of the new compiler results in a correct compiler.
Stability checks that compiling the compiler is idempotent under the codegen of the new compiler.
If the new compiler generates faulty code, running it again seems unlikely to generate the same code.
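
As a toy model of what is being compared here (purely illustrative; the real build compiles scalac with Ant, and none of the names below are the actual build code):

```scala
// Toy model of the starr -> locker -> quick -> strap pipeline and of the
// stability test. `compile` is a stand-in: for a correct, deterministic
// compiler, the output depends only on the sources, not on which binary
// performed the build.
object BootstrapModel {
  type Binary = String

  def compile(sources: String, by: Binary): Binary =
    s"bytecode-of-${sources.hashCode}"

  val sources         = "compiler sources under test"
  val starr: Binary   = "released-starr"
  val locker: Binary  = compile(sources, starr)  // built by starr
  val quick: Binary   = compile(sources, locker) // built by locker
  val strap: Binary   = compile(sources, quick)  // built by quick

  // Stability: compiling the compiler with itself is idempotent.
  val stable: Boolean = quick == strap
}
```

In this model an unstable compiler would be one whose `compile` output also depended on the `by` argument, making `quick != strap` -- exactly the situation the stability test is meant to catch.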

Josh Suereth

Jun 19, 2013, 6:02:14 AM
to scala-internals

If the compiler is unstable, you'll get runtime issues popping up. I'm with Lukas, but I also think having unstable bytecode may cause erroneous failures too.

I.e., the new compiler may be unable to generate bytecode that links to itself at runtime.


Lex Spoon

Jun 19, 2013, 10:06:03 AM
to scala-i...@googlegroups.com
One benefit to add to the list is that it enables the use of
third-party tools such as the Eclipse plugin and, oh, say, Semmle. If
you upgrade STARR rapidly, then your tools will forever be breaking on
you, because the version of Scala they support will be forever
different from the one the compiler is written in.

On the issue of repeatability, the only reliable thing to do is to use
a continuous builder. Yes, a development group can cut all sorts of
corners and still produce some semblance of halfway decent software.
If you want to do it right, though, then any binary that feeds into
deployment should be made on the builder. Any manual process adds a
number of variables.

The issue of dogfooding is important and has many facets. Let me add
just one observation: an internal Scala developer will have much
higher impact by using a new feature on something outside of the
compiler. You don't want to sink months into refactoring the compiler
in an observationally equivalent way. If possible, it's better to work
on a new library or tool.

Whatever is decided on those two questions, there are two points in
the referenced pull that are worth highlighting on their own. These
are both problems that have festered for 7-8 years.

1. Please do skip a layer of compilation in the build. It's slow and
has no benefit I can see. If the stability test has to do an extra
layer of builds, then so be it.

2. Drop the locker cache. You should only cache things in a build if
you know that recomputing them the slow way would yield the same
results. With the locker, we all know the opposite is true: we have
all unbroken a build by nuking locker and starting over from scratch.
That much is already a waste of time, but there's an additional
problem: you can't trust your test results if you can't trust your
build results. It's better to eliminate the variable.

Lex

Eugene Burmako

Jun 19, 2013, 10:09:41 AM
to <scala-internals@googlegroups.com>
>> You don't want to sink months into refactoring the compiler in an observationally equivalent way. If possible, it's better to work on a new library or tool.

Could you please elaborate on that?




Josh Suereth

Jun 19, 2013, 10:38:09 AM
to scala-internals
I still don't understand how everyone thinks that locker -> quick is an unnecessary step. How can we even begin to test the new compiler unless it's stable? Unless we *at the minimum* recompile the scala-library, the new compiler can (and in practice does) generate bytecode which is incompatible with its own library, causing runtime explosions. This is the exact issue that prevents the incremental compiler from running on locker, and nothing I've seen proposed actually solves it.

The last time I saw this happen was during value class refactorings during the 2.10 series.

I'm surprised you haven't seen this issue, as I've run into it several times now. It's the main reason the incremental compiler in sbt is continually breaking on the Scala build. Once the compiler goes unstable, you break most things, including partest tests. Only unit tests (which are hopefully being added soon) remain something that provides valuable "looks ok" input.

- Josh


Lex Spoon

Jun 24, 2013, 10:00:28 AM
to scala-i...@googlegroups.com
On Wed, Jun 19, 2013 at 10:09 AM, Eugene Burmako <eugene....@epfl.ch> wrote:
>>> You don't want to sink months into refactoring the compiler in an
>>> observationally equivalent way. If possible, it's better to work on a new
>>> library or tool.
>
> Could you please elaborate on that?

I'm thinking about the resources expended for various kinds of gain.
Staff time is a resource, one that is hard to expand. The notion of
"gain" is intentionally vague, but I believe a large array of goals
would lead one to the conclusion that you quoted.

Staff time is a resource, and six months of staff time is a gigantic
resource. You can get tremendous gains from six months of effort from
a good staff member. For example, that's enough time to implement a
new serialization system, using the trendy new static code generation
that many of them use. That's enough time to develop a dependency
injection system that, again, uses the trendy newer approach of code
generation. It's enough time for an RPC system, or a foreign-function
interface. It's enough time to build a Scala interface layer to almost
any web service you can think of.

To contrast, simply swapping out the existing Scala compiler for one
that has better internals does not advance all that many goals. It
doesn't make Scala developers more productive, because they just use
the compiler from the outside. It doesn't help Scala get adopted in
more contexts, for the same reason.

To attach this back to the main subject of the thread, recall that one
objection to using a stable release as STARR is that doing so will
make it impossible to try out new features on the compiler itself.
There are other ways to dogfood, though. To overstate the point,
nobody should want to be that guy that develops a new PL and then only
ever uses it to implement the PL.

Lex

Lex Spoon

Jun 24, 2013, 11:59:50 AM
to scala-i...@googlegroups.com
On Wed, Jun 19, 2013 at 10:38 AM, Josh Suereth <joshua....@gmail.com> wrote:
> I still don't understand how everyone thinks that locker -> quick is an
> unnecessary addition. How can we even begin to test the new compiler
> unless it's stable? UNLESS we *at the minimum* recompile the
> scala-library, then the new compiler can (and does in practice) generate
> bytecode which is incompatible with its own library, causing runtime
> explosions.

It might help to think about compilers that are *not* written in their
own language. They don't even have the option to do recursive compiles
of themselves, and so they don't. It works out fine.

Scala is written in itself, so it can have a two-layer locker/quick
build if that is helpful. However, I do not see how it helps. As you
say, you need to compile both the compiler and the library to get a
useful combination. But what is to stop you from building each of
them with STARR? I have done so routinely, and I know by observation
that I'm not the only one.

I don't know what the current Scala developers want from the locker
build. The original author of the feature gave me two justifications,
but both seem to fall down quickly when you think about them. Thus I
suspect it was more of an exercise in learning what you can do with
Ant. That's valuable, but at the same time, we are talking about
tooling that affects internal Scala developers on a daily basis.

The first justification is that it helps with the stability test. As a
historical note, the stability test used to run on every single build,
which required three compiles of the compiler. Later it was modified
to be an optional side target, and the locker build was modified to be
cached. At that point, I would claim, the locker build was completely
vestigial. Among other things, if you want to actually run a reliable
stability test, you first need to delete the locker!

The second justification I've been given is that the locker build
allows writing a new feature and then immediately using it. However,
this is also false on the face of it. If you use a new feature in the
compiler, then you already need STARR to be updated. If STARR is
updated, then you don't need an intermediate locker build after all.

Josh, you raise the issue of SBT incremental compiles. Please take
another look at my first paragraph on this thread:

"One benefit to add to the list is that it enables the use of
third-party tools such as the Eclipse plugin and, oh, say, Semmle. If
you upgrade STARR rapidly, then your tools will forever be breaking on
you...."

While that paragraph is about fast-changing STARRs, the point is more
forceful for builds that use a local locker. Wouldn't SBT have an
easier time of things to the extent it uses known quantities for its
compiler? Ideally a Scala release, but as a second best, a STARR
compiler that has gone through *some* degree of vetting. If you just
build a locker locally and then use it, then yes, you should expect
everything to be crashy. So why do that?

Lex Spoon

Adriaan Moors

unread,
Jun 24, 2013, 9:18:15 PM6/24/13
to scala-i...@googlegroups.com
Thanks for the detailed argument, Lex. I agree completely.


Grzegorz Kossakowski

unread,
Jun 24, 2013, 10:00:54 PM6/24/13
to scala-internals
Hi Lex,

Thanks for the analysis. It sounds convincing to me.

I think we should get rid of locker in master branch.

Seth Tisue

unread,
Jun 25, 2013, 1:31:22 PM6/25/13
to scala-i...@googlegroups.com
On Monday, June 24, 2013 10:00:28 AM UTC-4, lexspoon wrote:
simply swapping out the existing Scala compiler for one
that has better internals does not advance all that many goals.

If the existing compiler was already satisfactory, then I'd agree.

But it isn't satisfactory. "Slow and buggy" — that's Rod Johnson's description of the Scala compiler in his Scala Days keynote. (And he's not just some grouchy compiler hacker...)

Slowness and bugginess are systemic issues. They can't be fixed by tweaking and patching. "Better internals" are exactly what's needed in order to address them.

Case in point: the new pattern matcher in Scala 2.10. Pervasive quality issues, slain — by better internals.

Adriaan Moors

unread,
Jun 25, 2013, 1:44:01 PM6/25/13
to scala-i...@googlegroups.com
On Tue, Jun 25, 2013 at 10:31 AM, Seth Tisue <se...@tisue.net> wrote:
On Monday, June 24, 2013 10:00:28 AM UTC-4, lexspoon wrote:
simply swapping out the existing Scala compiler for one
that has better internals does not advance all that many goals.

If the existing compiler was already satisfactory, then I'd agree.

But it isn't satisfactory. "Slow and buggy" — that's Rod Johnson's description of the Scala compiler in his Scala Days keynote. (And he's not just some grouchy compiler hacker...)
I disagree with the "buggy" bit. I think 10% of the users/features encounter/cause 90% of the bugs.
 
Slowness and bugginess are systemic issues. They can't be fixed by tweaking and patching. "Better internals" are exactly what's needed in order to address them.

Case in point: the new pattern matcher in Scala 2.10. Pervasive quality issues, slain — by better internals.
I think it's also a counter-argument: it cost us nearly a person-year. We don't have that kind of resources for every bit of the compiler that needs replacing.
There's a huge risk in replacing the internals wholesale beyond just the cost -- how do you know they'll be better?

Seth Tisue

unread,
Jun 25, 2013, 2:18:01 PM6/25/13
to scala-i...@googlegroups.com
On Tue, Jun 25, 2013 at 1:44 PM, Adriaan Moors <adriaa...@typesafe.com> wrote:
I disagree with the "buggy" bit. I think 10% of the users/features encounter/cause 90% of the bugs.

I hope you're right. I'll admit my own perspective on this may be skewed. But that's actually why I quoted Rod. Where's his perception coming from? (probably veering off into scala-debate territory now, though.)
  
I think it's also a counter-argument: it cost us nearly a person year. We don't have that kind of resources for every bit of the compiler that needs replacing.
There's a huge risk in replacing the internals wholesale beyond just the cost -- how do you know they'll be better?

Well, I hope the team wouldn't pick a battle whose outcome was so uncertain.

Hard choices everywhere you look.

Josh Suereth

unread,
Jun 25, 2013, 2:22:43 PM6/25/13
to scala-internals
Lex - I think we may be arguing past each other, to some extent.

There are two things to consider here:

(1) I'm not arguing that locker should be used for common development, only that it HAS TO EXIST for correct testing of Scala.
(2) The longer STARR slides away from mainline compiler development, the more instability the compiler could have.  This leads to a different kind of pain than what we've experienced with tooling.


In particular, I agree with your points on tooling, HOWEVER - when adding new language features to Scala, we'd be unable to use them in the standard library.  If our compiler were written in a different language, this would be fine.  It would have its own library to use.  However, if the Scala language CHANGES, and the library must CHANGE AS WELL, then you need to break the dependency of the compiler on the standard library.   If you would be willing to do this, then I accept that we can remove locker and remain stable and have all the tooling support we need.   We could no longer "unit" test the library without building a compiler.

That may be a better way to structure our own dogfooding.  HOWEVER you *NEED* to go to the extreme of breaking the compiler->library dependency.   The compiler would be written in Scala.Previous and depend on it.  It would have to have its classes isolated from the next Scala library, because they would not be binary compatible.


Like I said before, I'm not sure you're seeing all the issues involved here.   While a lot of the time the compiler remains stable, so skipping locker or building off of STARR is ok, there are times when it is not, and those times happen often enough that you MUST keep the current chain of events to continue the current style of dogfooding.


So unless we do such an aggressive decoupling, I really don't see how what you suggest works in practice given the state of things now.

Grzegorz Kossakowski

unread,
Jun 25, 2013, 2:31:31 PM6/25/13
to scala-internals
On 25 June 2013 11:22, Josh Suereth <joshua....@gmail.com> wrote:
Lex - I think we may be arguing past each other, to some extent.

There are two things to consider here:

(1) I'm not arguing that locker should be used for common development, only that it HAS TO EXIST for correct testing of Scala.
(2) The longer STARR slides away from mainline compiler development, the more instability the compiler could have.  This leads to a different kind of pain than what we've experienced with tooling.


In particular, I agree with your points on tooling, HOWEVER - when adding new language features to Scala, we'd be unable to use them in the standard library.  If our compiler were written in a different language, this would be fine.  It would have its own library to use.  However, if the Scala language CHANGES, and the library must CHANGE AS WELL, then you need to break the dependency of the compiler on the standard library.

Josh,

Can you give a specific example of a language change which has to be coordinated with library change?
 
If you would be willing to do this, then I accept that we can remove locker and remain stable and have all the tooling support we need.   We could no longer "unit" test the library without building a compiler.

That may be a better way to structure our own dogfooding.  HOWEVER you *NEED* to go to the extreme of breaking the compiler->library dependency.   The compiler would be written in Scala.Previous and depend on it.  It would have to have its classes isolated from the next Scala library, because they would not be binary compatible.


Like I said before, I'm not sure you're seeing all the issues involved here.   While a lot of the time the compiler remains stable, so skipping locker or building off of STARR is ok, there are times when it is not, and those times happen often enough that you MUST keep the current chain of events to continue the current style of dogfooding.


So unless we do such an aggressive decoupling, I really don't see how what you suggest works in practice given the state of things now.


I don't follow your argument. It would be great to have an example of a specific scenario where locker is needed.

Josh Suereth

unread,
Jun 25, 2013, 2:34:11 PM6/25/13
to scala-internals
On Mon, Jun 24, 2013 at 11:59 AM, Lex Spoon <l...@lexspoon.org> wrote:
On Wed, Jun 19, 2013 at 10:38 AM, Josh Suereth <joshua....@gmail.com> wrote:
> I still don't understand how everyone thinks that locker -> quick is an
> unnecessary addition.   How can we even begin to test the new compiler
> unless it's stable?   UNLESS we *at the minimum* recompile the
> scala-library, then the new compiler can (and does in practice) generate
> bytecode which is incompatible with its own library, causing runtime
> explosions.

It might help to think about compilers that are *not* written in their
own language. They don't even have the option to do recursive compiles
of themselves, and so they don't. It works out fine.

Scala is written in itself, so it can have a two layer locker/quick
build if it is helpful. However, I do not see how it helps. As you
say, you need to compile both the compiler and the library to get a
useful combination. However, what is to stop you from building each of
them with STARR? I have done so routinely, and I know by observation
that I'm not the only one.


Yes. I'm not arguing it's impossible all the time, only that there are instances where this is broken and you cannot do so.  THERE, you need to have locker as a fallback.

 
I don't know what the current Scala developers want from the locker
build. The original author of the feature gave me two justifications,
but both seem to fall down quickly when you think about them. Thus I
suspect it was more of an exercise in learning what you can do with
Ant. That's valuable, but at the same time, we are talking about
tooling that affects internal Scala developers on a daily basis.

The first justification is that it helps with the stability test. As a
historical note, the stability test used to run on every single build,
which required building the compiler three times. Later
it was modified to be an optional side target, and the locker build
was modified to be cached. At this point I would claim the locker
build was completely vestigial. Among other things, if you want to
actually run a reliable stability test, you need to first delete the
locker!

I'm certainly not arguing that locker should remain cached in development, ONLY that when building releases you should definitely ensure locker exists.  You can have mainline development not use it, but I wouldn't trust the scala-library as it stands if there weren't 2 levels of compilation before release (stable -> unstable -> stable).

In this case, all I want from "locker" is a correct build.  Hell, locker should be renamed "quick" and we should have a "release" build and then "strap".  The names, as they are, aren't that great.



 
The second justification I've been given is that the locker build
allows writing a new feature and then immediately using it. However,
this is also false on the face of it. If you use a new feature in the
compiler, then you already need STARR to be updated. If STARR is
updated, then you don't need an intermediate locker build after all.

Josh, you raise the issue of SBT incremental compiles. Please take
another look at my first paragraph on this thread:

"One benefit to add to the list is that it enables the use of
third-party tools such as the Eclipse plugin and, oh, say, Semmle. If
you upgrade STARR rapidly, then your tools will forever be breaking on
you...."


Again, see the other email I wrote on new language features.  You want to include these in the standard library and test them.  So, while we could keep the *COMPILER* against a previous Scala version, the tests and standard library would need to make use of these new features, leading to instabilities and broken tooling.  I don't disagree that using tooling for making the compiler is a good thing, but you're always going to break tools at some point.  E.g. Java 1.4 -> 1.5.   IDEALLY we have a good place to do so, and we don't do it often.


 
While that paragraph is about fast-changing STARRs, the point is more
forceful for builds that use a local locker. Wouldn't SBT have an
easier time of things to the extent it uses known quantities for its
compiler? Ideally a Scala release, but as a second best, a STARR
compiler that has gone through *some* degree of vetting. If you just
build a locker locally and then use it, then yes, you should expect
everything to be crashy. So why do that?



So, again, we may be talking past each other.   For mainline Scala development, you don't need locker (most of the time).  HOWEVER -> for creating a release or doing automated builds / continuous integration you need the current "STARR" -> unstable compiler -> stable compiler process and then running your tests.  It just won't work well otherwise.  I'd love to see proof I'm wrong here, as it would make life a lot easier.

- Josh



Josh Suereth

unread,
Jun 25, 2013, 2:46:45 PM6/25/13
to scala-internals
On Tue, Jun 25, 2013 at 2:31 PM, Grzegorz Kossakowski <grzegorz.k...@gmail.com> wrote:
On 25 June 2013 11:22, Josh Suereth <joshua....@gmail.com> wrote:
Lex - I think we may be arguing past each other, to some extent.

There are two things to consider here:

(1) I'm not arguing that locker should be used for common development, only that it HAS TO EXIST for correct testing of Scala.
(2) The longer STARR slides away from mainline compiler development, the more instability the compiler could have.  This leads to a different kind of pain than what we've experienced with tooling.


In particular, I agree with your points on tooling, HOWEVER - when adding new language features to Scala, we'd be unable to use them in the standard library.  If our compiler were written in a different language, this would be fine.  It would have its own library to use.  However, if the Scala language CHANGES, and the library must CHANGE AS WELL, then you need to break the dependency of the compiler on the standard library.

Josh,

Can you give a specific example of a language change which has to be coordinated with library change?
 

value classes.

ANYTHING that changes pickling formats.  

 
If you would be willing to do this, then I accept that we can remove locker and remain stable and have all the tooling support we need.   We could no longer "unit" test the library without building a compiler.

That may be a better way to structure our own dogfooding.  HOWEVER you *NEED* to go to the extreme of breaking the compiler->library dependency.   The compiler would be written in Scala.Previous and depend on it.  It would have to have its classes isolated from the next Scala library, because they would not be binary compatible.


Like I said before, I'm not sure you're seeing all the issues involved here.   While a lot of the time the compiler remains stable, so skipping locker or building off of STARR is ok, there are times when it is not, and those times happen often enough that you MUST keep the current chain of events to continue the current style of dogfooding.


So unless we do such an aggressive decoupling, I really don't see how what you suggest works in practice given the state of things now.


I don't follow your argument. It would be great to have an example of specific scenario where locker is needed.


Let's say we come up with a new way to compile specialized code.   I create a new compiler and standard library compiled using STARR.   The new library/compiler (we'll call it quick-unstable) links against itself just fine.  However, if I compile something using quick-unstable, it will not link against the quick-unstable library, because the STARR compiler generated code using the OLD specialization format and the new compiler uses the NEW format.   The ONLY thing I can do when such an instability happens is take "quick-unstable" and compile a new standard library (let's call it "quick-stable").   NOW -> I can use the quick-unstable COMPILER to compile things against the quick-stable LIBRARY.  This entails some classloader separation, and possibly some detangling in the Scala compiler.


A diagram:

(STARR) ----compiles---->   scala-library-quick-unstable    (binary format STARR)
(STARR) ----compiles---->   scala-compiler-quick-unstable    (binary format STARR)
(STARR) ----compiles---->   scala-library-quick-unstable    (binary format STARR)
(quick-unstable) ----compiles---->   a test class    (binary format quick-unstable)

"A test class" cannot link against "scala-library-quick-unstable" currently.  This is exactly the issue we have with the sbt incremental compiler, and nothing stated solves it (sorry Lex, but you'd still be unable to use the incremental compiler on the new thing).


Basically the gist is that you cannot use the scala-library associated with a scala-compiler because that compiler is "unstable".   You need to generate a stable library before you can continue compiling projects.   You do not *need* to recompile the COMPILER for "quick-stable", but if you do, you can just release 2 artifacts:  scala-library-quick-stable and scala-compiler-quick-stable vs. having to release scala-library-quick-stable and scala-compiler-quick-unstable+scala-library-quick-unstable.


What I'd suggest:


(STARR) -- compiles -> quick-unstable
** For Development
   - all projects/tests run off quick-unstable for the cases where the binary format has not changed
** For Automated builds/Continuous integration
   -- quick-unstable -- compiles -> release-stable
   -- All projects/tests run off release-stable
   -- Releases/deployments made off release-stable.



So, the new (or renamed) phases of compilation:

* STARR - stable reference of the previous Scala version
* QUICK - a (potentially unstable) version of Scala compiled against the current sources
* RELEASE - a (guaranteed stable) version of Scala compiled against the current sources
* STRAP - a version of Scala compiled using RELEASE, used to validate that Scala compilation is indeed stable


We should NEVER have a cached version (like locker).   Just run development off of quick, and verify stability in the CI infrastructure.
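Read as a chain, the four stages above follow one another linearly; a toy sketch of that ordering (stage names taken from Josh's list — nothing here invokes a real build):

```python
# Each stage is compiled by its predecessor; STRAP exists only to verify
# that RELEASE reproduces itself byte-for-byte.
stages = ["STARR", "QUICK", "RELEASE", "STRAP"]
for builder, built in zip(stages, stages[1:]):
    print(f"{builder} compiles {built}")
```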


- Josh








Grzegorz Kossakowski

unread,
Jun 25, 2013, 2:55:47 PM6/25/13
to scala-internals
On 25 June 2013 11:46, Josh Suereth <joshua....@gmail.com> wrote:

Josh,

Can you give a specific example of a language change which has to be coordinated with library change?
 

value classes.

ANYTHING that changes pickling formats.  

If we changed the pickling format in a backward compatible way so the new compiler understands the old format then we don't have a problem, right?
 

 
If you would be willing to do this, then I accept that we can remove locker and remain stable and have all the tooling support we need.   We could no longer "unit" test the library without building a compiler.

That may be a better way to structure our own dogfooding.  HOWEVER you *NEED* to go to the extreme of breaking the compiler->library dependency.   The compiler would be written in Scala.Previous and depend on it.  It would have to have its classes isolated from the next Scala library, because they would not be binary compatible.


Like I said before, I'm not sure you're seeing all the issues involved here.   While a lot of the time the compiler remains stable, so skipping locker or building off of STARR is ok, there are times when it is not, and those times happen often enough that you MUST keep the current chain of events to continue the current style of dogfooding.


So unless we do such an aggressive decoupling, I really don't see how what you suggest works in practice given the state of things now.


I don't follow your argument. It would be great to have an example of specific scenario where locker is needed.


Let's say we come up with a new way to compile specialized code.   I create a new compiler and standard library compiled using STARR.   The new library/compiler (we'll call it quick-unstable) links against itself just fine.  However, if I compile something using quick-unstable, it will not link against the quick-unstable library, because the STARR compiler generated code using the OLD specialization format and the new compiler uses the NEW format.

Here you work with the assumption that the old format is not supported in the new compiler. I think if we change that assumption then we never end up in this situation where you have "quick-unstable", right?

That's exactly what I would argue for: you are never allowed to make a change to the compiler which is not backward compatible with respect to the STARR you are using. Getting rid of locker helps to ensure that this is true because you have no infrastructure to develop a feature which breaks backward compatibility.

Josh Suereth

unread,
Jun 25, 2013, 3:04:56 PM6/25/13
to scala-internals
On Tue, Jun 25, 2013 at 2:55 PM, Grzegorz Kossakowski <grzegorz.k...@gmail.com> wrote:
On 25 June 2013 11:46, Josh Suereth <joshua....@gmail.com> wrote:

Josh,

Can you give a specific example of a language change which has to be coordinated with library change?
 

value classes.

ANYTHING that changes pickling formats.  

If we changed the pickling format in a backward compatible way so the new compiler understands the old format then we don't have a problem, right?
 

Not just understand, but link against.   I.e. you'd have to retain the old mechanism, whatever it is.

 

 
If you would be willing to do this, then I accept that we can remove locker and remain stable and have all the tooling support we need.   We could no longer "unit" test the library without building a compiler.

That may be a better way to structure our own dogfooding.  HOWEVER you *NEED* to go to the extreme of breaking the compiler->library dependency.   The compiler would be written in Scala.Previous and depend on it.  It would have to have its classes isolated from the next Scala library, because they would not be binary compatible.


Like I said before, I'm not sure you're seeing all the issues involved here.   While a lot of the time the compiler remains stable, so skipping locker or building off of STARR is ok, there are times when it is not, and those times happen often enough that you MUST keep the current chain of events to continue the current style of dogfooding.


So unless we do such an aggressive decoupling, I really don't see how what you suggest works in practice given the state of things now.


I don't follow your argument. It would be great to have an example of specific scenario where locker is needed.


Let's say we come up with a new way to compile specialized code.   I create a new compiler and standard library compiled using STARR.   The new library/compiler (we'll call it quick-unstable) links against itself just fine.  However, if I compile something using quick-unstable, it will not link against the quick-unstable library, because the STARR compiler generated code using the OLD specialization format and the new compiler uses the NEW format.

Here you work with the assumption that the old format is not supported in the new compiler. I think if we change that assumption then we never end up in this situation where you have "quick-unstable", right?


That would be true.   I don't really see Scala being able to do that in the near future.  Basically, what we're talking about is cross-major-version binary backward-compatibility of Scala.  Again, when you want to alter how closures are compiled, or how specialization is compiled, you now have to ensure you can link against code compiled using the old format.   So far, Scala has done very little work in this regard.   I also think this drastically complicates life for the compiler developer.

From the user perspective, it sure would be nice.  Are you willing to commit to this?   That's what the suggestion is.   I'd love to see it, but all I've heard for the past few years is that scala is not ready to commit to backward-binary-compatibility across major versions yet.


 
That's exactly what I would argue for: you are never allowed to make a change to the compiler which is not backward compatible with respect to the STARR you are using. Getting rid of locker helps to ensure that this is true because you have no infrastructure to develop a feature which breaks backward compatibility.


Right, removing the recourse would force us to get creative.  It's definitely the way to go if we're willing to commit to it.  But don't tread that ground lightly.  Make the decision and don't back out.  It could also slow down any feature (even speed improvements in bytecode).  HOWEVER, it could be worth it.  I know our users would love us if we had backwards-binary compatibility.  I think our users would also like to see a faster compiler, so let's figure out when the time is right to make that commitment.

- Josh


Grzegorz Kossakowski

unread,
Jun 25, 2013, 3:22:02 PM6/25/13
to scala-internals
On 25 June 2013 12:04, Josh Suereth <joshua....@gmail.com> wrote:

If we changed the pickling format in a backward compatible way so the new compiler understands the old format then we don't have a problem, right?
 

Not just understand, but link against.   I.e. you'd have to retain the old mechanism, whatever it is.

Correct.
 

That would be true.   I don't really see scala being able to do that in the near future.  Basically, what we're talking about is cross-major-version binary backward-compatibility of Scala.  Again, when you want to alter how closures are compiled, or how specialization is compiled you now have to ensure you can link against code compiled using the old format.   So far, Scala has done very little work in this regard.   I also think this drastically complicates life for the compiler developer.   

Actually, that's not what I proposed. I proposed merely that at any given point in time whatever version of the compiler we have in the master branch is backward compatible with respect to the current STARR version.

To give a specific example, let's suppose master is using Scala 2.11.0-M4 as STARR and we develop a new scheme for specialization which is turned on by default in some commit before 2.11.0-M5. That commit has to be made to understand bytecode generated by the old scheme (coming from 2.11.0-M4). We release 2.11.0-M5, which emits bytecode using the new scheme but still understands the old one. We switch to 2.11.0-M5 as STARR. Now our STARR emits the new scheme. The next commit allows us to get rid of support for the old scheme, and only the code for the new scheme survives. Once you release 2.11.0-M6 there will be no sign of the code that supported the old scheme.

As you can see the whole transition happened within development of one major release (during milestone period) and it doesn't introduce cross-major-version backward compatibility requirement. It only introduces transient backward compatibility which is a lot easier to engineer.

If you compare this to our current process, the change is that you have two stages of a transition to a new feature as opposed to one. Our current infrastructure allows you to jump through those 2 stages in one leap, but I argue that we should drop it and enforce that we always go through the two-stage process. This will be a first step towards a more stable Scala compiler development process.
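A toy check of the invariant behind this two-stage process: at every commit, master must still read the scheme the current STARR emits. The version labels are taken from the M4/M5/M6 walk-through above; the table itself is purely illustrative:

```python
# (commit,                      scheme STARR emits, schemes master can read)
history = [
    ("baseline (STARR=M4)",        "old", {"old"}),
    ("add new scheme (STARR=M4)",  "old", {"old", "new"}),
    ("bump STARR to M5",           "new", {"old", "new"}),
    ("drop old scheme (STARR=M5)", "new", {"new"}),
]

for commit, starr_emits, master_reads in history:
    # Transient backward compatibility: master always understands STARR's output.
    assert starr_emits in master_reads, f"{commit}: master cannot read STARR output"
print("invariant holds at every commit")
```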
 
From the user perspective, it sure would be nice.  Are you willing to commit to this?   That's what the suggestion is.   I'd love to see it, but all I've heard for the past few years is that scala is not ready to commit to backward-binary-compatibility across major versions yet.

I agree we are not there yet to deliver exactly that. However, I think this transient backward-compatibility requirement will serve as a first step toward that goal.

Adriaan Moors

unread,
Jun 25, 2013, 5:41:51 PM6/25/13
to scala-i...@googlegroups.com
I agree with what Greg outlined -- this is what I meant by slowing down compiler changes.

(I'm not against facilities for using a custom built STARR for development -- you can either publish one locally and specify that version as your starr, or we can keep the current approach where you point to any set of jars as your STARR compiler.)

Outside of the development environment, my proposal gives us a synchronization point across our tooling (IDEs, build tools, Scaladoc, Partest,... <-- independent projects effectively built against what we now call STARR). It also makes it easier to build Scala for contributors, who often only change the library or fix bugs that don't affect code generation/core classes.

Josh, I don't follow your usage of the word "unstable". There's nothing unstable about a quick that was built with STARR instead of locker. Stability can and will of course be tested, but we can now do so in parallel with the test suite (called "restrap" in the PR).

Here's what I think our PR validation flow should look like; we can only meet these timing constraints by skipping locker:

1. build quick, propagate artifacts downstream (goal: should not take more than 10 min)
2. downstream, as parallel Jenkins jobs (goal: no job runs for more than 15 min):
  - run the test suite, possibly different categories (pos/run/junit/...) in parallel 
  - check for stability: the current compiler compiled with a previous compiler, and the current compiler compiled with that result, must generate the same bytecode for a fixed set of projects
  - check integration with the IDE, SBT, and everything dbuild can build
  - generate docs
  - assemble a distribution
  - ...
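The flow fans out after a single quick build; a toy sketch of that shape, with the job names taken from the list above and the "jobs" reduced to stand-ins that merely report success:

```python
from concurrent.futures import ThreadPoolExecutor

def build_quick() -> str:
    return "quick artifacts"          # stage 1: one upstream build

def run_job(name: str) -> str:
    return f"{name}: OK"              # stand-in for a downstream Jenkins job

jobs = ["test suite", "stability", "IDE/SBT/dbuild integration",
        "docs", "distribution"]
artifacts = build_quick()
with ThreadPoolExecutor() as pool:    # stage 2: everything runs in parallel
    results = list(pool.map(run_job, jobs))
for line in results:
    print(line)
```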



Simon Ochsenreither

unread,
Jun 25, 2013, 6:10:10 PM6/25/13
to scala-i...@googlegroups.com

Where's his perception coming from? (probably veering off into scala-debate territory now, though.)

Hahaha. Answering that would probably violate the Code of Conduct. Too bad that only some people are exempt from following it.
Anyway, if pissing off contributors was the purpose of the keynote, he did a pretty great job.



Well, I hope the team wouldn't pick a battle whose outcome was so uncertain.

Imho, one benefit of the recent slimming down of things is that there is less code to maintain, test and fix ... making it probably a bit more likely to succeed in refactoring/radically improving code.

Paolo G. Giarrusso

unread,
Jun 25, 2013, 8:39:19 PM6/25/13
to scala-i...@googlegroups.com, Miguel Garcia
Adriaan,
I disagree somewhat with Greg's proposal, and I believe that I describe below a better idea.

I think "understanding" multiple ABIs can be too complex, and requires writing extra code to be thrown away, wasting developer time. And you can reap most (all?) of the benefits you describe without such complexity.

# Examples of why supporting two ABIs can be harder:

Imagine replacing $plus$ with $plusBippy$ in the encoding of +: right now that means changing one string constant. If you need to link to code using either encoding, now the parser needs to guess during desugaring, or you need to change the compiler architecture to know the ABI of the target library when you do the desugaring.
I assume similar problems are conceivable for more realistic changes. Changing the interface of Function1 sounds even more fun: a value of type Function1 could have either the old or new interface, depending on who created the value; the users of the value have no way to know which.

# But why build locker *all the time*?

I think we (should) need that only for 1% of the commits. So those commits should make it explicit.

# Proposal: When a commit changes the ABI, it should reactivate building through locker by changing a flag in the build. Reviewers can see that and think hard whether that's worth merging. But if it's worth it, the change doesn't get technically harder. *That* commit, once merged, is built through a 3rd build step, like currently with locker. Subsequent ones are also built this way until a new version is tagged and released (like a milestone). Then a script can switch to a new STARR and reset that flag, and those changes can be committed together.

Now, since changing the ABI would be such a big deal, you can also just tag a new version right after. Possibly such a version doesn't follow the standard timing of milestones, but it doesn't need to be a milestone.

# Advantages:
* STARRs are tagged releases, with all the advantages
* most builds are faster because locker is not there anymore
* changing the ABI is not technically harder, it's just a more important decision: it's syntactically visible in the commit, so reviewers can discuss such a change with more care.
You want to poll the whole community for an ABI change? You want to do the opposite and have less review for such a change?
Easily pick your favorite option: just change a configuration file written in natural language! It's a... policy document!!!
 
(The only commits where the ABI can change silently are those between the ABI-breaking commit and the new STARR, but I envision there being none.)

* since locker is only used explicitly, its use has to be documented, summarizing what Josh explained.

# Disadvantages:

* a version has to be tagged when the ABI-breaking change is merged, and published on Nexus. There's a concern that such an intermediate release happens at the wrong time; hence I don't propose calling it a milestone or announcing it to users. It should just be tagged with a different suffix, and the tagging scheme should be documented.

# Final notes
Finally, note that the developer actually changing the ABI will in fact need locker cached on their machine. Such caching might need to be less automatic, though: you might need to type ant buildkeeplocker when you expect that keeping locker is safe. (Alternatively, one might simply build a custom STARR; I think the only difference is which is easier for developers to use, and I suspect caching locker is easier.)

After describing this, it's also easier for me to see why caching is most often OK: since most commits don't change the ABI, building locker is in fact not needed!

# WDYT?

Paolo

Josh Suereth

Jun 26, 2013, 9:16:39 AM
to scala-internals
On Tue, Jun 25, 2013 at 5:41 PM, Adriaan Moors <adriaa...@typesafe.com> wrote:
I agree with what Greg outlined -- this is what I meant by slowing down compiler changes.

(I'm not against facilities for using a custom built STARR for development -- you can either publish one locally and specify that version as your starr, or we can keep the current approach where you point to any set of jars as your STARR compiler.)

Outside of the development environment, my proposal gives us a synchronization point across our tooling (IDEs, build tools, Scaladoc, Partest, ... <-- independent projects that effectively build against what we now call STARR). It also makes it easier to build Scala for contributors, who often only change the library or fix bugs that don't affect code generation or core classes.

Josh, I don't follow your usage of the word "unstable". There's nothing unstable about a quick that was built with STARR instead of locker. Stability can and will of course be tested, but we can now do so in parallel with the test suite (called "restrap" in the PR).


True, by "unstable" I mean it's "potentially unstable".   There are certain types of changes (I ran into several while developing the sbt build) where the compiler goes unstable.  We've been hiding ourselves from them, but they will rear their heads, and developers will have to go to greater lengths to avoid them in the future.   It's not a bad thing, but we should, as a community, commit to doing this.
 
Here's what I think our PR validation flow should look like; we can only meet these timing constraints by skipping locker:

1. build quick, propagate artifacts downstream (goal: should not take more than 10 min)
2. downstream, as parallel jenkins jobs (goal: no job runs for more than 15 min):
  - run the test suite, possibly different categories (pos/run/junit/...) in parallel 
  - check for stability: verify that the current compiler compiled with a previous compiler (quick) and the current compiler compiled with quick (strap) generate the same bytecode for a fixed set of projects
  - check integration with the IDE, SBT, and everything dbuild can build
  - generate docs
  - assemble a distribution
  - ...
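The stability check above boils down to: build quick from the current sources with a previous compiler, build strap from the same sources with quick, and require bit-identical class files. A minimal sketch of just the comparison step, with compiler outputs modeled as name-to-bytecode maps (the modeling is illustrative, not how the build represents them):

```scala
// "Stable" means: quick and strap contain exactly the same class files,
// and each class file has identical bytecode in both outputs.
def isStable(quick: Map[String, Array[Byte]],
             strap: Map[String, Array[Byte]]): Boolean =
  quick.keySet == strap.keySet &&
    quick.forall { case (name, bytes) =>
      java.util.Arrays.equals(bytes, strap(name))
    }
```

In the build this amounts to diffing the quick and strap output directories byte for byte.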

Do you need a distribution for every pull request?  I can understand docs from a "does this crash scaladoc" mindset....

Adriaan Moors

Jun 26, 2013, 7:21:12 PM6/26/13