I could hack the compiler to search in both places, but we don't want that code to be committed either.
In my experience it's rarely possible to have a starr as a separate commit.
--
You received this message because you are subscribed to the Google Groups "scala-internals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-interna...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
I'd also like to mention the question of quasiquotes. Prohibiting scalac from being built with transient starrs means that we will significantly slow down adoption of quasiquotes in our codebase.
Yes, Lukas' example is a very nice illustration of the problem at hand. I think his proposed solution is quite good here. I don't think it's a problem, though, that we need some hacking around to handle this situation. The hack can still be made in a separate commit and documented, which is both sufficient and scalable.
I also have to retract my assessment that it's rarely possible to have a starr as a separate commit. I used to work in an area of the compiler that was in constant flux and that touched a lot of stuff (e.g. classtag refactorings). In that area it was indeed very frequent to have to bend over backwards to perform changes. However, nowadays I very infrequently find myself in need of doing non-trivial things wrt starr. It's almost always "change something, commit, rebuild a starr, commit, clean up old stuff, commit".
Quasiquotes are the ultimate way to work with trees. The readability difference between manual construction (and deconstruction) and quasiquotes is humongous.

The quest of improving the quality of our codebase has been very difficult. Even tiny improvements have been achieved with significant effort. Quasiquotes offer something really big in this area, therefore I think it's useful to think twice about being overly conservative.
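To make the readability gap concrete, here is a sketch. The `Tree` classes below are deliberately simplified, hypothetical stand-ins for scalac's much larger hierarchy; the quasiquote forms shown in the comments are what the `q` interpolator buys you:

```scala
// Hypothetical, stripped-down stand-ins for the compiler's Tree classes,
// here only to show the verbosity gap (scalac's real hierarchy is far larger).
sealed trait Tree
final case class Ident(name: String) extends Tree
final case class Select(qual: Tree, name: String) extends Tree
final case class Apply(fun: Tree, args: List[Tree]) extends Tree

// Manual construction of the tree for the expression `x + y`:
val manual: Tree = Apply(Select(Ident("x"), "$plus"), List(Ident("y")))

// With quasiquotes the same tree is written as      q"x + y"
// and deconstructed with                            case q"$a + $b" => ...
println(manual)
```

Multiply this difference across the hundreds of tree-manipulation sites in the compiler and the readability argument largely makes itself.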
Well, we're discussing potential implications of the proposed change,
and quasiquotes outline such implications. I don't think we should
drop this line of conversation just because it's about an as-yet
unreleased feature.
1-2 months of delay is only one facet. Another one is discovering and
fixing bugs in quasiquotes while migrating the compiler to using them.
Every such bug will impose more delays.
By the way, could we also please hear explicitly what the benefits of
the proposed change are?
And one more thing. What are the new defaults for PR validation? Is
this something that's being proposed, or has it already been implemented
and pushed?
--
Well, the current starr mechanism looks pretty stable to me. After the linked discussion I rethought my approach, which was indeed not very careful, and everything has been fine and reproducible ever since. What do you mean by "correctly built"?
If the risk of having a fatal problem like that is high, then quasiquotes shouldn't be introduced until the risk is lowered, because they may hamper the productivity of all the people working on the compiler who might not be interested in debugging issues related to quasiquotes. If the risk is low (we have reasonable confidence that quasiquotes are stable enough), then a delay of 1-2 months for delivering bug fixes shouldn't be a problem IMO.

In case of a real need, we could publish a tagged release between milestones. But that should be the exception rather than the rule.
Wait a second. What are the benefits over the current approach? Why migrate from the old mechanism in the first place?
--
1) How is this different from the current system? Starrs also provide a way to standardize a compiler version, right?
2a) Did we have a lot of occasions in the past when new starrs caused regressions? How significant is that number in comparison with the number of other regressions?
2b) Quasiquotes are just a library, modulo a small pattern matcher hack. How is using quasiquotes different from introducing something new in a standard library and using that something in stdlib or in the compiler itself?
3) From the looks of it, we can retain our current system and skip building locker.
--
It still seems unlikely you could have a compiler that passes the stability test but fails the test suite when bootstrapped.
If the compiler is unstable, you'll get runtime issues popping up. I'm with Lukas, but I also think having unstable bytecode may cause erroneous failures too.
I.e., the new compiler may be unable to generate bytecode that links against itself at runtime.
--
Lex
> simply swapping out the existing Scala compiler for one
> that has better internals does not advance all that many goals.
On Monday, June 24, 2013 10:00:28 AM UTC-4, lexspoon wrote:
> simply swapping out the existing Scala compiler for one
> that has better internals does not advance all that many goals.

If the existing compiler was already satisfactory, then I'd agree. But it isn't satisfactory. "Slow and buggy" — that's Rod Johnson's description of the Scala compiler in his Scala Days keynote. (And he's not just some grouchy compiler hacker...)
Slowness and bugginess are systemic issues. They can't be fixed by tweaking and patching. "Better internals" are exactly what's needed in order to address them.

Case in point: the new pattern matcher in Scala 2.10. Pervasive quality issues, slain — by better internals.
I disagree with the "buggy" bit. I think 10% of the users/features encounter/cause 90% of the bugs.
I think it's also a counter-argument: it cost us nearly a person-year. We don't have that kind of resources for every bit of the compiler that needs replacing. There's a huge risk in replacing the internals wholesale beyond just the cost -- how do you know they'll be better?
Lex - I think we may be arguing past each other, to some extent.

There are two things to consider here:

(1) I'm not arguing that locker should be used for common development, only that it HAS TO EXIST for correct testing of Scala.

(2) The longer STARR slides away from mainline compiler development, the more instability the compiler could have. This leads to a different kind of pain than what we've experienced with tooling.

In particular, I agree with your points on tooling, HOWEVER - when adding new language features to Scala, we'd be unable to use them in the standard library. If our compiler were written in a different language, this would be fine. It would have its own library to use. However, if the Scala language CHANGES, and the library must CHANGE AS WELL, then you need to break the dependency of the compiler on the standard library.

If you would be willing to do this, then I accept that we can remove locker and remain stable and have all the tooling support we need. We could no longer "unit" test the library without building a compiler. That may be a better way to structure our own dogfooding. HOWEVER you *NEED* to go to the extreme of breaking the compiler->library dependency. The compiler would be written in Scala-previous and depend on it. It would have to have its classes isolated from the next Scala library, because they would not be binary compatible.

Like I said before, I'm not sure you're seeing all the issues involved here. While a lot of the time the compiler remains stable, so skipping locker or building off of STARR is ok, there are times when it is not, and those times happen often enough that you MUST keep the current chain of events to continue the current style of dogfooding.

So unless we do such an aggressive decoupling, I really don't see how what you suggest works in practice given the state of things now.
On Wed, Jun 19, 2013 at 10:38 AM, Josh Suereth <joshua....@gmail.com> wrote:
> I still don't understand how everyone thinks that locker -> quick is an
> unnecessary addition. How can we even begin to test the new compiler
> unless it's stable? UNLESS we *at the minimum* recompile the
> scala-library, then the new compiler can (and does in practice) generate
> bytecode which is incompatible with its own library, causing runtime
> explosions.

It might help to think about compilers that are *not* written in their
own language. They don't even have the option to do recursive compiles
of themselves, and so they don't. It works out fine.
Scala is written in itself, so it can have a two layer locker/quick
build if it is helpful. However, I do not see how it helps. As you
say, you need to compile both the compiler and the library to get a
useful combination. However, what is to stop you from building each of
them with STARR? I have done so routinely, and I know by observation
that I'm not the only one.
I don't know what the current Scala developers want from the locker
build. The original author of the feature gave me two justifications,
but both seem to fall down quickly when you think about them. Thus I
suspect it was more of an exercise in learning what you can do with
Ant. That's valuable, but at the same time, we are talking about
tooling that affects internal Scala developers on a daily basis.
The first justification is that it helps with the stability test. As a
historical note, the stability test used to run on every single build,
which required building the compiler three times in a row. Later
it was modified to be an optional side target, and the locker build
was modified to be cached. At this point I would claim the locker
build was completely vestigial. Among other things, if you want to
actually run a reliable stability test, you need to first delete the
locker!
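For readers unfamiliar with the test: conceptually it compiles the compiler with the previous stage, then compiles it again with the result, and demands bit-identical output from the two builds. A minimal sketch of that final comparison (hypothetical helper code, not what the Ant build actually runs):

```scala
import java.io.File
import java.nio.file.Files
import java.security.MessageDigest

// Hex SHA-256 digest of a file's contents.
def sha(f: File): String =
  MessageDigest.getInstance("SHA-256")
    .digest(Files.readAllBytes(f.toPath))
    .map("%02x".format(_)).mkString

// Recursively list all regular files under a directory.
def filesUnder(f: File): List[File] =
  if (f.isFile) List(f)
  else Option(f.listFiles).toList.flatten.flatMap(filesUnder)

// Map each relative path under `dir` to a digest of its contents.
def digests(dir: File): Map[String, String] =
  filesUnder(dir)
    .map(f => dir.toPath.relativize(f.toPath).toString -> sha(f))
    .toMap

// Stable iff the two build output dirs contain exactly the same files
// with exactly the same bytes.
def stable(quick: File, strap: File): Boolean =
  digests(quick) == digests(strap)
```

The point of contention in this thread is not what the comparison does, but which stage's output you feed into it, and whether a cached locker makes the result trustworthy.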
The second justification I've been given is that the locker build
allows writing a new feature and then immediately using it. However,
this is also false on the face of it. If you use a new feature in the
compiler, then you already need STARR to be updated. If STARR is
updated, then you don't need an intermediate locker build after all.
Josh, you raise the issue of SBT incremental compiles. Please take
another look at my first paragraph on this thread:
"One benefit to add to the list is that it enables the use of
third-party tools such as the Eclipse plugin and, oh, say, Semmle. If
you upgrade STARR rapidly, then your tools will forever be breaking on
you...."
While that paragraph is about fast-changing STARRs, the point is more
forceful for builds that use a local locker. Wouldn't SBT have an
easier time of things to the extent it uses known quantities for its
compiler? Ideally a Scala release, but as a second best, a STARR
compiler that has gone through *some* degree of vetting. If you just
build a locker locally and then use it, then yes, you should expect
everything to be crashy. So why do that?
On 25 June 2013 11:22, Josh Suereth <joshua....@gmail.com> wrote:
> Lex - I think we may be arguing past each other, to some extent.
>
> There are two things to consider here:
>
> (1) I'm not arguing that locker should be used for common development, only that it HAS TO EXIST for correct testing of Scala.
>
> (2) The longer STARR slides away from mainline compiler development, the more instability the compiler could have. This leads to a different kind of pain than what we've experienced with tooling.
>
> In particular, I agree with your points on tooling, HOWEVER - when adding new language features to Scala, we'd be unable to use them in the standard library. If our compiler were written in a different language, this would be fine. It would have its own library to use. However, if the Scala language CHANGES, and the library must CHANGE AS WELL, then you need to break the dependency of the compiler on the standard library.

Josh,

Can you give a specific example of a language change which has to be coordinated with library change?
> If you would be willing to do this, then I accept that we can remove locker and remain stable and have all the tooling support we need. We could no longer "unit" test the library without building a compiler. That may be a better way to structure our own dogfooding. HOWEVER you *NEED* to go to the extreme of breaking the compiler->library dependency. The compiler would be written in Scala-previous and depend on it. It would have to have its classes isolated from the next Scala library, because they would not be binary compatible.
>
> Like I said before, I'm not sure you're seeing all the issues involved here. While a lot of the time the compiler remains stable, so skipping locker or building off of STARR is ok, there are times when it is not, and those times happen often enough that you MUST keep the current chain of events to continue the current style of dogfooding. So unless we do such an aggressive decoupling, I really don't see how what you suggest works in practice given the state of things now.

I don't follow your argument. It would be great to have an example of a specific scenario where locker is needed.
> Josh,
>
> Can you give a specific example of a language change which has to be coordinated with library change?

value classes. ANYTHING that changes pickling formats.
> I don't follow your argument. It would be great to have an example of a specific scenario where locker is needed.

Let's say we come up with a new way to compile specialized code. I create a new compiler and standard library compiled using STARR. The new library/compiler (we'll call it quick-unstable) links against itself just fine. However, if I compile something using quick-unstable, it will not link against the quick-unstable library, because the STARR compiler generated code using the OLD specialization format and the new compiler uses the NEW format.
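Josh's scenario can be illustrated with a toy. The method names below are hypothetical, only loosely modeled on the real `$mc...$sp` specialization naming: a caller compiled against the old encoding looks up an entry point the newly-encoded library no longer provides, and dies at runtime even though everything compiled fine.

```scala
// Toy illustration (hypothetical names): the library as emitted by the
// NEW compiler carries only the new specialized-method name.
class NewFormatLib {
  def apply$mcII$sp(x: Int): Int = x + 1 // NEW encoding's entry point
}

val methods = classOf[NewFormatLib].getMethods.map(_.getName).toSet

// A caller emitted under the OLD encoding would link against this name:
val oldName = "apply$mcI$sp"

assert(!methods(oldName))          // the old entry point no longer exists,
assert(methods("apply$mcII$sp"))   // only the new one does; old-format
                                   // callers would hit NoSuchMethodError
```

This is exactly why the mismatch only surfaces at link/run time, after the build itself has succeeded.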
On 25 June 2013 11:46, Josh Suereth <joshua....@gmail.com> wrote:
> value classes. ANYTHING that changes pickling formats.

If we changed the pickling format in a backward compatible way, so the new compiler understands the old format, then we don't have a problem, right?
> Let's say we come up with a new way to compile specialized code. I create a new compiler and standard library compiled using STARR. The new library/compiler (we'll call it quick-unstable) links against itself just fine. However, if I compile something using quick-unstable, it will not link against the quick-unstable library, because the STARR compiler generated code using the OLD specialization format and the new compiler uses the NEW format.

Here you work with the assumption that the old format is not supported in the new compiler. I think if we change that assumption then we never end up in this situation where you have "quick-unstable", right?
That's exactly what I would argue for: you are never allowed to make a change to the compiler which is not backward compatible with respect to the STARR you are using. Getting rid of locker helps to ensure that this is true, because you have no infrastructure to develop a feature which breaks backward compatibility.
> If we changed the pickling format in a backward compatible way, so the new compiler understands the old format, then we don't have a problem, right?

Not just understand, but link against. I.e. you'd have to retain the old mechanism, whatever it is.
That would be true. I don't really see scala being able to do that in the near future. Basically, what we're talking about is cross-major-version binary backward-compatibility of Scala. Again, when you want to alter how closures are compiled, or how specialization is compiled you now have to ensure you can link against code compiled using the old format. So far, Scala has done very little work in this regard. I also think this drastically complicates life for the compiler developer.
From the user perspective, it sure would be nice. Are you willing to commit to this? That's what the suggestion is. I'd love to see it, but all I've heard for the past few years is that scala is not ready to commit to backward-binary-compatibility across major versions yet.
Where's his perception coming from? (Probably veering off into scala-debate territory now, though.)
Well, I hope the team wouldn't pick a battle whose outcome was so uncertain.
I agree with what Greg outlined -- this is what I meant by slowing down compiler changes.

(I'm not against facilities for using a custom built STARR for development -- you can either publish one locally and specify that version as your starr, or we can keep the current approach where you point to any set of jars as your STARR compiler.)
Outside of the development environment, my proposal gives us a synchronization point across our tooling (IDEs, build tools, Scaladoc, Partest, ... <-- independent projects effectively built what we now call STARR). It also makes it easier to build Scala for contributors, who often only change the library or fix bugs that don't affect code generation/core classes.
Josh, I don't follow your usage of the word "unstable". There's nothing unstable about a quick that was built with STARR instead of locker. Stability can and will of course be tested, but we can now do so in parallel with the test suite (called "restrap" in the PR).
Here's what I think our PR validation flow should look like; we can only meet these timing constraints by skipping locker:

1. build quick, propagate artifacts downstream (goal: should not take more than 10 min)
2. downstream, as parallel jenkins jobs (goal: no job runs for more than 15 min):
   - run the test suite, possibly different categories (pos/run/junit/...) in parallel
   - check for stability by verifying that the current compiler compiled with a previous compiler generates the same bytecode for a fixed set of projects as the current compiler compiled with the current compiler compiled with a previous compiler
   - check integration with the IDE, SBT, and everything dbuild can build
   - generate docs
   - assemble a distribution
   - ...
On Tue, Jun 25, 2013 at 5:41 PM, Adriaan Moors <adriaa...@typesafe.com> wrote:
> I agree with what Greg outlined -- this is what I meant by slowing down compiler changes.
>
> (I'm not against facilities for using a custom built STARR for development -- you can either publish one locally and specify that version as your starr, or we can keep the current approach where you point to any set of jars as your STARR compiler.)
>
> Outside of the development environment, my proposal gives us a synchronization point across our tooling (IDEs, build tools, Scaladoc, Partest, ... <-- independent projects effectively built what we now call STARR). It also makes it easier to build Scala for contributors, who often only change the library or fix bugs that don't affect code generation/core classes.
> Josh, I don't follow your usage of the word "unstable". There's nothing unstable about a quick that was built with STARR instead of locker. Stability can and will of course be tested, but we can now do so in parallel with the test suite (called "restrap" in the PR).

True, by "unstable" I mean it's "potentially unstable". There are certain types of changes (I ran into several while developing the sbt build) where the compiler goes unstable. We've been hiding ourselves from them, but they will rear their heads, and developers will have to go to greater lengths to avoid them in the future. It's not a bad thing, but we should, as a community, commit to doing this.
> Here's what I think our PR validation flow should look like; we can only meet these timing constraints by skipping locker:
>
> 1. build quick, propagate artifacts downstream (goal: should not take more than 10 min)
> 2. downstream, as parallel jenkins jobs (goal: no job runs for more than 15 min):
>    - run the test suite, possibly different categories (pos/run/junit/...) in parallel
>    - check for stability by verifying that the current compiler compiled with a previous compiler generates the same bytecode for a fixed set of projects as the current compiler compiled with the current compiler compiled with a previous compiler
>    - check integration with the IDE, SBT, and everything dbuild can build
>    - generate docs
>    - assemble a distribution
>    - ...

Do you need a distribution for every pull request? I can understand docs from a "does this crash scaladoc" mindset....