On Tuesday, June 11, 2013 1:17:05 AM UTC+2, Paul Phillips wrote:
There is conflation of some quite different things taking place in the thread about widening, and I would like to disentangle it. Things being conflated include 1) widening of literals vs. widening of typed values, 2) what it would mean not to widen (it DOESN'T have to mean, nor should it mean, that you start inferring List[Any] or List[AnyVal] all over the place) and 3) genuine "widening" (e.g. Byte -> Int) vs. lossy "pseudo-widening" (e.g. Long -> Double/Float).
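To make distinction (3) concrete, a small sketch of mine: Byte → Int preserves every value exactly, while Long → Double can silently lose information, since a Double has only 53 bits of mantissa.

```scala
object WideningDemo extends App {
  // Genuine widening: every Byte is representable exactly as an Int.
  val b: Byte = 127
  val i: Int  = b  // lossless

  // Lossy "pseudo-widening": distinct Longs near Long.MaxValue
  // collapse to the same Double.
  val x: Long = Long.MaxValue
  val y: Long = Long.MaxValue - 1
  println(x == y)                    // false
  println(x.toDouble == y.toDouble)  // true: precision silently lost
}
```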
With regard to literals:
def f1(x: Long, y: Double) = List(1l, 2.0)
def f2(x: Long, y: Double) = List(x, y)
As far as I am concerned there is no compelling reason for the type inferencer to be restricted when the value is a literal and it can be represented exactly in the type being ascribed to it. It should feel free to infer List[Double] for f1.
f2 is the problem. I propose that if you desperately need f2 to compile without alteration, you are living too dangerously and we should not help you. If it were up to me, f2 wouldn't compile at all. No inferring AnyVal or Any, it is a straight compile-time error. Here are several ways you could write it so it would compile, depending on what you wanted to emerge from the cloud of dust:
def f2(x: Long, y: Double) = List[Any](x, y)
def f2(x: Long, y: Double) = List[AnyVal](x, y)
def f2(x: Long, y: Double) = List(x.toDouble, y)
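For reference, here is what each variant produces; the type ascriptions are mine, added only to make the element types visible:

```scala
object Variants extends App {
  val a: List[Any]    = List[Any](1L, 2.0)      // explicitly Any
  val b: List[AnyVal] = List[AnyVal](1L, 2.0)   // explicitly AnyVal
  val c: List[Double] = List(1L.toDouble, 2.0)  // both elements are Double,
                                                // so List[Double] is inferred
  println(c)  // List(1.0, 2.0)
}
```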
Maybe a -Y option for that could be merged? I'd propose making the error you showed in the other thread a bit more explicit (explaining why inferring Any is forbidden), but that matters less if it's not the default behavior.
If you are in the throes of the hobgoblins of foolish consistency, perhaps you object to List(q1, q2) ever not compiling, because there's always a type which COULD be inferred; at worst, it is a List[Any]. I don't share this position because
You say:
I think type systems should serve us, not the other way around.
While that's nice, you can't always have it. The first thing about type systems is that they reject some valid programs, so if you use a type system you have to conform to its rules. Hence the question is really about the tradeoff.
The other thing is that there is zero reason why any of this has to break all the code in the world. We manage to evolve everything else somehow. The small step is to make it possible for people to easily opt out of these unsafe conversions; the larger one is to make people who want unsafe conversions opt into them; in neither case does it break all the code in the world any more than any other change we make. Try to compile some Scala 2.7 code with 2.11. You don't get a lifetime guarantee in this business.
Independently of the positions being offered here, Simon is a lot more convincing because he has to come up with real reasons. Those for the status quo have the luxury of saying "that's how it is" and indeed that's how it is. But there's no reason anyone should walk away from this thinking that's how it has to be. It doesn't. It wouldn't even be difficult to evolve. The difficult part is overcoming the obstructionism; the really difficult part is that the status quo can win any argument simply by withdrawing attention.
If we were picking kickball teams, "inertia" would be my first pick, every time. Have to get that guy on my side.
You like soundness? Well, soundness's second name is inertia. If we had a soundness proof for Scala and wanted to keep it working, far fewer things could be changed without first checking the proofs. The same goes for proofs of other useful properties. Typical languages don't have soundness proofs for everything, but you still need to be careful.
This proposal makes it more complex to talk about the *correctness* (in particular, the completeness) of type inference: it makes things worse under the standard criterion that type inference should infer a type whenever it is possible to do so.
And I guess that Martin wants to study such issues (for Scala 3). So, can you define a completeness criterion where this is not a regression and which is no more complex?
Martin wrote:
To stay sane with type inference (and I am not saying we are there yet!) you have to ruthlessly simplify. There's a constant tension between keeping things clear and catering to engineering concerns. I believe Scala has erred a bit too much towards catering to engineering concerns, at the expense of greatly complicating its model of types and type inference. Right now, I would hope the pendulum can swing a bit in the other direction, towards clear foundations supported by a faithful compiler.
And in general, I see Martin's point. And as far as I can tell, both you and Martin want a real specification of Scala and a compiler actually implementing it. He just wants to get there by simplifying Scala. If this succeeds, maybe we'll manage to take any small piece of code and figure out what it means without asking Scalac. Can you imagine?
And maybe we'll know that some properties always hold. My favorite missing property is that you can always omit unambiguous implicit conversions (nowadays you can't, since they help type inference; yet I think subtyping relations can be just as complicated, and they aren't a problem).
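A sketch of the kind of thing I mean (the names are made up): least-upper-bound inference runs before any implicit conversion can fire, so a conversion that is perfectly unambiguous still sometimes has to be written out by hand.

```scala
import scala.language.implicitConversions

object OmittingConversions {
  case class Frac(n: Int, d: Int)
  implicit def intToFrac(i: Int): Frac = Frac(i, 1)

  // With an expected type the conversion stays implicit: 3 becomes Frac(3, 1).
  val a: List[Frac] = List(Frac(1, 2), 3)

  // Without one, inference computes the lub of Frac and Int before the
  // conversion is ever considered, so we get List[Any] instead of List[Frac].
  val b = List(Frac(1, 2), 3)

  // To recover List[Frac] here, the conversion must be applied explicitly.
  val c = List(Frac(1, 2), intToFrac(3))

  def main(args: Array[String]): Unit =
    println(a == c)  // true
}
```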
I guess that once you have a specification, then you can try to figure out how to change it and what are the consequences.
With current technology, almost the only way to get there is to have a small language, so that you can do proofs and they fit brains. Some day maybe mechanized theorem proving will make that superfluous, but we're not there yet.
When I think about complexity, right now I think of
https://issues.scala-lang.org/browse/SI-6944, on unsoundness with GADTs. I cannot even begin to describe what a mess it feels like. The soundness hole I ran into seems to affect even the spec; luckily, in some cases the compiler knows better than the spec.
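For readers unfamiliar with the feature at stake, a minimal GADT sketch (an illustration of the mechanism, not a reproduction of SI-6944): in each branch the compiler must deduce an equation about T for the body to typecheck, and it is exactly this refinement step that the soundness hole concerns.

```scala
sealed trait Expr[T]
case class IntLit(i: Int)                    extends Expr[Int]
case class BoolLit(b: Boolean)               extends Expr[Boolean]
case class Plus(l: Expr[Int], r: Expr[Int]) extends Expr[Int]

object Gadt extends App {
  def eval[T](e: Expr[T]): T = e match {
    case IntLit(i)  => i                  // compiler deduces T = Int here
    case BoolLit(b) => b                  // ... and T = Boolean here
    case Plus(l, r) => eval(l) + eval(r)  // T = Int again
  }

  println(eval(Plus(IntLit(1), IntLit(2))))  // 3
}
```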
What's interesting there is that in 2007 the problem didn't exist, according to a paper I found; so I suspect that some later change was made without checking its impact on GADTs.
If you (generic you) come to the Scala Workshop 2013, you'll hear my talk on some of the consequences.