widening, pseudo-widening, inferring, weak lubbing, and kickball captains


Paul Phillips

Jun 10, 2013, 7:17:05 PM
to scala-i...@googlegroups.com
There is conflation of some quite different things taking place in the thread about widening, and I would like to disentangle it. Things being conflated include 1) widening of literals vs. widening of typed values, 2) what it would mean not to widen (it DOESN'T have to mean, nor should it mean, that you start inferring List[Any] or List[AnyVal] all over the place) and 3) genuine "widening" (e.g. Byte -> Int) vs. lossy "pseudo-widening" (e.g. Long -> Double/Float).
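The gap between the two kinds of widening in point 3 is easy to see concretely (a minimal sketch; the object name is made up, and toInt/toDouble make explicit the conversions the inferencer would insert silently):

```scala
object WideningKinds extends App {
  // Genuine widening: every Byte survives a round trip through Int.
  val b: Byte = -128
  assert(b.toInt.toByte == b)

  // Pseudo-widening: Double has a 53-bit mantissa, so distinct Longs
  // above 2^53 collapse to the same Double value.
  val big = (1L << 53) + 1 // 9007199254740993
  assert(big.toDouble == (1L << 53).toDouble) // rounds to 2^53
  assert(big.toDouble.toLong != big)          // information silently lost
}
```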

With regard to literals:

def f1(x: Long, y: Double) = List(1L, 2.0)
def f2(x: Long, y: Double) = List(x, y)

As far as I am concerned there is no compelling reason for the type inferencer to be restricted when the value is a literal and it can be represented exactly in the type being ascribed to it. It should feel free to infer List[Double] for f1.

f2 is the problem. I propose that if you desperately need f2 to compile without alteration, you are living too dangerously and we should not help you. If it were up to me, f2 wouldn't compile at all. No inferring AnyVal or Any, it is a straight compile-time error. Here are several ways you could write it so it would compile, depending on what you wanted to emerge from the cloud of dust:

def f2(x: Long, y: Double) = List[Any](x, y)
def f2(x: Long, y: Double) = List[AnyVal](x, y)
def f2(x: Long, y: Double) = List(x.toDouble, y)
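A sketch of how the rewrites differ observably (the names f2a/f2b/f2c are made up for the comparison): the explicitly widened variant can no longer tell nearby Longs apart, while the Any/AnyVal variants keep both values exact.

```scala
object RewriteVariants extends App {
  def f2a(x: Long, y: Double): List[Any]    = List[Any](x, y)     // both values exact
  def f2b(x: Long, y: Double): List[AnyVal] = List[AnyVal](x, y)  // same, AnyVal-bounded
  def f2c(x: Long, y: Double): List[Double] = List(x.toDouble, y) // explicit lossy widening

  // Long.MaxValue and Long.MaxValue - 100 both round to 2^63 as Doubles,
  // so the widened lists compare equal while the exact ones do not.
  assert(f2a(Long.MaxValue, 0.0) != f2a(Long.MaxValue - 100, 0.0))
  assert(f2c(Long.MaxValue, 0.0) == f2c(Long.MaxValue - 100, 0.0))
}
```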

If you are in the throes of the hobgoblins of foolish consistency, perhaps you object to List(q1, q2) ever not compiling, because there's always a type which COULD be inferred; at worst, it is a List[Any]. I don't share this position because I think type systems should serve us, not the other way around.

The other thing is that there is zero reason why any of this has to break all the code in the world. We manage to evolve everything else somehow. The small step is to make it possible for people to easily opt out of these unsafe conversions; the larger one is to make people who want unsafe conversions opt into them; in neither case does it break all the code in the world any more than any other change we make. Try to compile some scala 2.7 code with 2.11. You don't get a lifetime guarantee in this business.

Independently of the positions being offered here, Simon is a lot more convincing because he has to come up with real reasons. Those for the status quo have the luxury of saying "that's how it is" and indeed that's how it is. But there's no reason anyone should walk away from this thinking that's how it has to be. It doesn't. It wouldn't even be difficult to evolve. The difficult part is overcoming the obstructionism; the really difficult part is that the status quo can win any argument simply by withdrawing attention.

If we were picking kickball teams, "inertia" would be my first pick, every time. Have to get that guy on my side.

Paolo G. Giarrusso

Jun 11, 2013, 8:42:17 PM
to scala-i...@googlegroups.com
On Tuesday, June 11, 2013 1:17:05 AM UTC+2, Paul Phillips wrote:
There is conflation of some quite different things taking place in the thread about widening, and I would like to disentangle it. Things being conflated include 1) widening of literals vs. widening of typed values, 2) what it would mean not to widen (it DOESN'T have to mean, nor should it mean, that you start inferring List[Any] or List[AnyVal] all over the place) and 3) genuine "widening" (e.g. Byte -> Int) vs. lossy "pseudo-widening" (e.g. Long -> Double/Float).

With regard to literals:

def f1(x: Long, y: Double) = List(1L, 2.0)
def f2(x: Long, y: Double) = List(x, y)

As far as I am concerned there is no compelling reason for the type inferencer to be restricted when the value is a literal and it can be represented exactly in the type being ascribed to it. It should feel free to infer List[Double] for f1.

f2 is the problem. I propose that if you desperately need f2 to compile without alteration, you are living too dangerously and we should not help you. If it were up to me, f2 wouldn't compile at all. No inferring AnyVal or Any, it is a straight compile-time error. Here are several ways you could write it so it would compile, depending on what you wanted to emerge from the cloud of dust:

def f2(x: Long, y: Double) = List[Any](x, y)
def f2(x: Long, y: Double) = List[AnyVal](x, y)
def f2(x: Long, y: Double) = List(x.toDouble, y)
 
Maybe you could merge a -Y option for that? I'd propose making the error you showed in the other thread a bit more explicit (why is inferring "Any" forbidden?), but that matters less if it's not the default behavior.

If you are in the throes of the hobgoblins of foolish consistency, perhaps you object to List(q1, q2) ever not compiling, because there's always a type which COULD be inferred; at worst, it is a List[Any]. I don't share this position because
 
You say:

I think type systems should serve us, not the other way around.

While that's nice, you can't always have it. The first thing a type system does is reject some valid programs, so if you use one you have to conform to its rules. Hence the question is about the tradeoff.

The other thing is that there is zero reason why any of this has to break all the code in the world. We manage to evolve everything else somehow. The small step is to make it possible for people to easily opt out of these unsafe conversions; the larger one is to make people who want unsafe conversions opt into them; in neither case does it break all the code in the world any more than any other change we make. Try to compile some scala 2.7 code with 2.11. You don't get a lifetime guarantee in this business.

Independently of the positions being offered here, Simon is a lot more convincing because he has to come up with real reasons. Those for the status quo have the luxury of saying "that's how it is" and indeed that's how it is. But there's no reason anyone should walk away from this thinking that's how it has to be. It doesn't. It wouldn't even be difficult to evolve. The difficult part is overcoming the obstructionism; the really difficult part is that the status quo can win any argument simply by withdrawing attention.

If we were picking kickball teams, "inertia" would be my first pick, every time. Have to get that guy on my side.

You like soundness? Well, soundness's second name is inertia. If we had a soundness proof for Scala and wanted to keep it valid, far fewer things could be changed without first re-checking the proofs. Similarly for proofs of other useful properties. Typical languages don't have soundness proofs for everything, but you still need to be careful.

This proposal makes it more complex to talk about *correctness* (in particular, completeness) of type inference - it makes things worse under the standard criterion that type inference should infer a type whenever one exists.

And I guess that Martin wants to study such issues (for Scala 3). So, can you define a completeness criterion where this is not a regression and which is no more complex?
Martin wrote:

To stay sane with type inference (and I am not saying we are there yet!) you have to ruthlessly simplify. There's a constant tension between keeping things clear and catering to engineering concerns. I believe Scala has erred a bit too much towards catering to engineering concerns, at the expense of greatly complicating its model of types and type inference. Right now, I would hope the pendulum can swing a bit in the other direction, towards clear foundations supported by a faithful compiler.

And in general, I see Martin's point. And as far as I can tell, both you and Martin want a real specification of Scala and a compiler actually implementing it. He just wants to get there by simplifying Scala. If this succeeds, maybe we'll manage to take any small piece of code and figure out what it means without asking Scalac. Can you imagine?

And we'll maybe know that some properties always hold. My favorite missing property would be that you can always omit unambiguous implicit conversions (nowadays you can't, since they help type inference; but I think subtyping relations can be as complicated and they aren't a problem).

I guess that once you have a specification, then you can try to figure out how to change it and what are the consequences.

With current technology, almost the only way to get there is to have a small language, so that you can do the proofs and they fit in our brains. Some day mechanized theorem proving may make that superfluous, but we're not there yet.

When I think about complexity, right now I think of https://issues.scala-lang.org/browse/SI-6944, on unsoundness with GADTs. I cannot even begin to describe what a mess it feels like. The soundness hole I ran into seems to affect even the spec; luckily, in some cases the compiler knows better than the spec.

What's interesting there is that in 2007 the problem didn't exist, according to a paper I found; so I'd suspect that some change was made later without checking the impact on GADTs.

If you (generic you) come to the Scala Workshop 2013, you'll hear my talk on some of the consequences.

Paul Phillips

Jun 12, 2013, 4:29:19 AM
to scala-i...@googlegroups.com

On Tue, Jun 11, 2013 at 8:42 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
And in general, I see Martin's point. And as far as I can tell, both you and Martin want a real specification of Scala and a compiler actually implementing it. He just wants to get there by simplifying Scala. If this succeeds, maybe we'll manage to take any small piece of code and figure out what it means without asking Scalac. Can you imagine?

Let's be careful who we award the "simplifying high ground." According to GitHub, my lifetime contribution to the codebase is -26,000 lines; Martin's is +146,000 lines. Pick the top ten most complicated things about Scala (language or implementation) and you will find zero of them originated with me. Let's not go telling ourselves (or anyone else) that I'm some kind of complexity lover.

And simplifying blindly only makes things complicated again. I am only interested in simple to the extent that it is more useful than the alternatives. All things being equal, simple is better. And all things aren't equal[*].

[*] But all things are comparable, thanks to our simple universal equality.


Paul Phillips

Jun 12, 2013, 4:30:53 AM
to scala-i...@googlegroups.com
On Tue, Jun 11, 2013 at 8:42 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
When I think about complexity, right now I think of https://issues.scala-lang.org/browse/SI-6944, on unsoundness with GADTs. I cannot even begin to describe what a mess it feels like.

Enjoy the view from the tip of that iceberg.

Erik Osheim

Jun 12, 2013, 11:57:44 AM
to scala-i...@googlegroups.com
On Mon, Jun 10, 2013 at 07:17:05PM -0400, Paul Phillips wrote:
> With regard to literals:
>
> def f1(x: Long, y: Double) = List(1L, 2.0)
> def f2(x: Long, y: Double) = List(x, y)
>
> As far as I am concerned there is no compelling reason for the type
> inferencer to be restricted when the value is a literal and it can be
> represented exactly in the type being ascribed to it. It should feel free
> to infer List[Double] for f1.

Agreed. I think wanting integer literals to stand in for
Long/Float/Double covers 99% of the arguments in favor of widening. And
from where we sit in Spire, it's really the "other kind" of widening
that hurts us.

> The other thing is that there is zero reason why any of this has to break
> all the code in the world. We manage to evolve everything else somehow. The
> small step is to make it possible for people to easily opt out of these
> unsafe conversions; the larger one is to make people who want unsafe
> conversions opt into them; in neither case does it break all the code in
> the world any more than any other change we make. Try to compile some scala
> 2.7 code with 2.11. You don't get a lifetime guarantee in this business.

This is a good point. I'd love to forbid unsafe widening in general,
but in order of increasing compromise I'd settle for:

1. Forbid unsafe widening
2. (...except for literals)
3. #1 or #2, unless an option (or language feature) explicitly allows widening
4. #1 or #2, but only when an option (or language feature) forbids widening

Anyway, I just wanted to weigh in to support Paul and Simon here.

-- Erik

Jeff Olson

Jun 12, 2013, 12:41:55 PM
to scala-i...@googlegroups.com, er...@plastic-idolatry.com
I wholeheartedly agree. At the very least I would like a command-line option to forbid unsafe widening of primitives. For literals, widening should be allowed if and only if it's safe: 2147483647 + 1.0f should be a compile error.
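The reason 2147483647 + 1.0f is a trap, spelled out in a small sketch (Float's 24-bit mantissa cannot represent Int.MaxValue; the object name is made up):

```scala
object FloatWideningTrap extends App {
  // Int.MaxValue (2^31 - 1) is not representable as a Float: widening
  // rounds it up to 2^31 before the addition even happens.
  assert(2147483647.toFloat == 2.147483648e9f)
  // And adding 1.0f to a Float near 2^31 (spacing 256) changes nothing.
  assert(2147483647 + 1.0f == 2.147483648e9f)
}
```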

-Jeff

Paolo G. Giarrusso

Jun 13, 2013, 1:18:07 PM
to scala-i...@googlegroups.com
On Wednesday, June 12, 2013 10:29:19 AM UTC+2, Paul Phillips wrote:
On Tue, Jun 11, 2013 at 8:42 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
And in general, I see Martin's point. And as far as I can tell, both you and Martin want a real specification of Scala and a compiler actually implementing it. He just wants to get there by simplifying Scala. If this succeeds, maybe we'll manage to take any small piece of code and figure out what it means without asking Scalac. Can you imagine?

Let's be careful who we award the "simplifying high ground." According to github, my lifetime contribution to the codebase is -26,000 lines, martin's is +146,000 lines. Pick the top ten most complicated things about scala (language or implementation) and you will find zero of them originated with me. Let's not go telling ourselves (or anyone else) I'm some kind of complexity lover.

I didn't accuse you of that, and I'm not awarding any *simplifying high ground* — I have great respect for both you and Martin.

I'm praising Martin's plan to formalize more of Scala, I'm not commenting on the current Scala formalization (or lack thereof).

And simplifying blindly only makes things complicated again. I am only interested in simple to the extent that it is more useful than the alternatives. All things being equal, simple is better. And all things aren't equal[*].

All things aren't equal indeed.

But are you factoring in the costs to a formalization? You argued that the costs are insignificant by offering an implementation, but an implementation doesn't measure the cost to a formalization. Of course I can't estimate that cost either, and I only know the basics here, but I have a glimpse of why what you propose is, formally speaking, "strange", so it's not surprising that Martin isn't happy about considering it.

I suspect that if you instead acknowledged that issue, it'd be possible to have a more productive discussion. And maybe you'd end up formalizing it yourself, what do I know? Or prototyping type inference for Dotty (though I don't know whether that helps without the reasoning behind it). Or modifying some formalization along the lines you suggest — I guess I'd start from "Colored Local Type Inference" or something like that. But take these suggestions with a grain of salt; I'm not sure they're actually a good idea.

Of course, I also find it sad that type inference currently seems to be somewhat frozen waiting for Dotty. For instance, I do suspect that the handling of GADTs could be improved in the current framework (SI-5298 comes to mind, the difference between typed patterns and everything else is another example).

Once we had a formalization, one could actually try to change it to incorporate this or other ideas and still talk about its correctness.

Best,
Paolo

Paul Phillips

Jun 13, 2013, 1:24:38 PM
to scala-i...@googlegroups.com

On Thu, Jun 13, 2013 at 1:18 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
Once we had a formalization, one could actually try to change it to incorporate this or other ideas and still talk about its correctness.

As we live with a specification frozen in 2006 and a language with a dozen kinds of unsoundness, I don't know how I'm supposed to pretend that these things matter. I find that specification and formalization are very important when it's convenient for them to be important, because they bring the full weight of authority to bear on whatever was wanted anyway. The rest of the time? Quiet as mice.


Paolo G. Giarrusso

Jun 13, 2013, 2:31:39 PM
to scala-i...@googlegroups.com
On Thursday, June 13, 2013 7:24:38 PM UTC+2, Paul Phillips wrote:
I don't know how I'm supposed to pretend that these things matter. I find that specification and formalization are very important when it's convenient for them to be important, because they bring the full weight of authority to bear on whatever was wanted anyway. The rest of the time? Quiet as mice.

[...]

As we live with a specification frozen in 2006 and a language with a dozen kinds of unsoundness,
 
Oh, things are *worse* than that: I talked of a formalization, not of the specification, which does not contain a soundness proof. And quite clearly Scalac is not *founded* on a soundness proof. So I only count (some) papers, and I fear there you have to go even further back (in the best case, to the vObj paper from ECOOP 2003 AFAIK, but I never actually studied it).

Of course I do see your point. But shouldn't somebody fix that by *having* a formalization?

People are working on that, it just ain't easy: see https://github.com/namin/dot and their FOOL '12 paper (which is already out of date again). The complex bit is that the formalization doesn't ignore the (important) crazy bits of Scala, so a proof of soundness will actually tell us something relevant (for an implementation of that type system). And they are mechanizing the proof, so that a computer can check it is correct. As a side effect, I think they'd also get a provably correct typechecker for the core language (in Coq, probably convertible to OCaml). And apparently there's also a Scala implementation in that repo.

But that also explains why it's taking so much time — unlike any calculus I know of, DOT seems to like to live dangerously, which is probably needed to model what Scala allows anyway. Actually, a bit more than that: in Section 4.2 they have an example like this, which isn't allowed directly in Scala:

val z = new AnyRef { z => type L <: z.L }

But I think Mark Harrah has shown how to encode recursion in the type system, so I think you can already encode some types like that nowadays.

Now, why should DOT or Scala 3 matter for what we do now? I guess because the rule you propose won't be there in Scala 3, so adding it now might not help in the long run. But now I'm just guessing.
The other reason is that adding features without studying them formally has led to the current status. I can see why one would freeze the language for a while. In my limited experience, what I learned from SI-5298 is that type inference is close to unmaintained:

Adriaan Moors wrote:
Type vars that depend on other type vars are not supported well. I'm not sure that can be fixed (without major changes to the algorithm)
... 
Type inference is not spec'ed, essentially. It's also not that easy to improve it without breaking existing code. We are certainly interested in improving it overall, but I hesitate to add an ad-hoc hack for this case. It needs to be an all-encompassing improvement, with assurances about the effect on existing code, understandability,... That's not going to happen right away. It's just too hard, sorry.

That is, I guess, he doesn't want to rewrite things without doing it right.

Adriaan Moors

Jun 13, 2013, 5:42:32 PM
to scala-i...@googlegroups.com
I wish I could chime in and offer some technical content.
Instead I have to step in and remind everyone of the code of conduct on all our mailing lists, which applies to all its members.

Please refrain from insinuation and do not make this personal. It has to be about designing and implementing the best Scala we can, together. Criticism is fine, as long as it stays focused on that topic.

We're all in this together. Writing and maintaining a spec is hard and extremely time consuming[*].
Of course, so is making the right decisions on language extensions, implementing them, fixing bugs and having important technical discussions like this. We should strive to improve on all of these counts.

However -- and I must insist -- for this to be a productive forum, we must stick to the technical part of the discussion and leave the politics and personal gripes out of it.

thank you,
adriaan

[*] And I'm not even talking about soundness proofs or calculi. These are tremendously challenging even for a minimal subset of Scala, and will stay out of reach for the full language for quite some time to come. That is not to say we shouldn't aim for the moon. We are getting closer every day we talk shop.



--
You received this message because you are subscribed to the Google Groups "scala-internals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-interna...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

martin odersky

Jun 14, 2013, 9:43:33 AM
to scala-internals
On Thu, Jun 13, 2013 at 11:42 PM, Adriaan Moors <adriaa...@typesafe.com> wrote:
I wish I could chime in and offer some technical content.
Instead I have to step in and remind everyone of the code of conduct on all our mailing lists, which applies to all its members.

Please refrain from insinuation and do not make this personal. It has to be about designing and implementing the best Scala we can, together. Criticism is fine, as long as it stays focused on that topic.

We're all in this together. Writing and maintaining a spec is hard and extremely time consuming[*].
Of course, so is making the right decisions on language extensions, implementing them, fixing bugs and having important technical discussions like this. We should strive to improve on all of these counts.

However -- and I must insist -- for this to be a productive forum, we must stick to the technical part of the discussion and leave the politics and personal gripes out of it.

thank you,
adriaan

[*] And I'm not even talking about soundness proofs or calculi. These are tremendously challenging even for a minimal subset of Scala, and will stay out of reach for the full language for quite some time to come. That is not to say we shouldn't aim for the moon. We are getting closer every day we talk shop.


Thanks Adriaan!

I personally have decided that the tone of some of the latest discussions on this list causes me more grief than it's worth. I will take a break from reading the list. I'll be back, but right now I have the feeling I need some time off.

Thanks

 - Martin

 


--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967