I removed all numeric widening conversions — and not a single test broke


Simon Ochsenreither

unread,
Jun 10, 2013, 4:38:18 AM6/10/13
to scala-i...@googlegroups.com
See https://github.com/soc/scala/commits/topic/no-numeric-conversions

Either we have terrible test coverage or, what seems more likely after testing: the implicit defs were just defined “for fun”, because they don't exhibit any actual effect. E.g. the tons of hardcoded widening behavior inside the compiler don't supplement the implicit defs but rather define all of the behavior themselves, rendering the implicits useless (except for adding some additional implicit conversions to the scope).
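For reference, a minimal sketch of the kind of widening implicit defs in question (names modeled on the real ones like scala.Int.int2long, but simplified and defined locally here, so this is an illustration rather than the actual library code):

```scala
import scala.language.implicitConversions

// Sketch of the widening conversions under discussion, modeled as plain
// implicit defs. The real ones live on the numeric companion objects.
object Widening {
  implicit def int2long(x: Int): Long     = x.toLong
  implicit def int2double(x: Int): Double = x.toDouble
  implicit def long2float(x: Long): Float = x.toFloat // lossy for large values
}

// Exercised explicitly here; in Predef-like scope they would apply silently.
assert(Widening.int2long(42) == 42L)
assert(Widening.long2float(123456789L).toLong == 123456792L) // precision lost
```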

Opinions?

Paul Phillips

unread,
Jun 10, 2013, 4:46:51 AM6/10/13
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 1:38 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
Either we have terrible test coverage or, what seems more likely after testing: the implicit defs were just defined “for fun”, because they don't exhibit any actual effect. E.g. the tons of hardcoded widening behavior inside the compiler don't supplement the implicit defs but rather define all of the behavior themselves, rendering the implicits useless (except for adding some additional implicit conversions to the scope).

I can reveal it was not "for fun", since I've tried to remove them before so I know where things break without them. If they don't have any effect on any test it's because something has changed between 2.10 and now - indeed it has - see below.

class A {
  def f[T <: Int](x: T): Long = x
}
/***
% scalac210_2 -Xprint:typer ./a.scala |grep 'def f'
    def f[T >: Nothing <: Int](x: T): Long = scala.this.Int.int2long(x)

% scalac3 -Xprint:typer ./a.scala |grep 'def f'
    def f[T <: Int](x: T): Long = x.toLong
***/


Paul Phillips

unread,
Jun 10, 2013, 5:12:04 AM6/10/13
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 1:46 AM, Paul Phillips <pa...@improving.org> wrote:
something has changed between 2.10 and now

Jason Zaugg

unread,
Jun 10, 2013, 5:17:48 AM6/10/13
to scala-i...@googlegroups.com
I think they were retained for:

scala> def foo[A](a: A)(implicit ev: A => Int) = ev(a)
foo: [A](a: A)(implicit ev: A => Int)Int

scala> reify(foo('!'))
res3: reflect.runtime.universe.Expr[Int] =
Expr[Int]($read.foo('!')({
  ((x) => Char.char2int(x))
}))

We should add tests for that, though.
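A hedged sketch of what such a test could check. The implicit val below stands in for the library's Char => Int widening so the snippet doesn't depend on Predef's defs; the names are illustrative, not the actual test suite's:

```scala
// Evidence-style use of a widening conversion: the caller summons an
// implicit A => Int rather than relying on a silent conversion.
def foo[A](a: A)(implicit ev: A => Int): Int = ev(a)

// Stand-in for the library's Char => Int widening (an assumption for
// illustration; the real thing would come from the standard library).
implicit val charWidens: Char => Int = _.toInt

assert(foo('!') == 33) // '!' is code point 33
```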

-jason

Matthew Pocock

unread,
Jun 10, 2013, 6:29:14 AM6/10/13
to scala-i...@googlegroups.com
I'd love to see all widening shifted out of the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs for me, and when it isn't, it's extra cognitive overhead; I'd love to be able to work in code with all such conversions disabled.

Matthew


--
You received this message because you are subscribed to the Google Groups "scala-internals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-interna...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 



--
Dr Matthew Pocock
Turing ate my hamster LTD

Integrative Bioinformatics Group, School of Computing Science, Newcastle University

skype: matthew.pocock
tel: (0191) 2566550

Paul Phillips

unread,
Jun 10, 2013, 6:33:26 AM6/10/13
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 3:29 AM, Matthew Pocock <turingate...@gmail.com> wrote:
I'd love to see all widening shifted out from the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs to me and when it isn't, it's extra cognitive overhead, and I'd love to be able to work in code with all such conversions disabled.

Some baby gets scooped into the bathwater, but you can do it already.

% scalac3 -Ywarn-numeric-widen -Xfatal-warnings b.scala 
b.scala:1: warning: implicit numeric widening
class A { def f(x: Int): Long = x }
                                ^
error: No warnings can be incurred under -Xfatal-warnings.
one warning found
one error found


Kevin Wright

unread,
Jun 10, 2013, 6:36:49 AM6/10/13
to scala-i...@googlegroups.com
For reasons best illustrated in humorous comic form: http://www.smbc-comics.com/index.php?db=comics&id=2999#comic

Simon Ochsenreither

unread,
Jun 10, 2013, 8:12:00 AM6/10/13
to scala-i...@googlegroups.com

I'd love to see all widening shifted out from the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs to me and when it isn't, it's extra cognitive overhead, and I'd love to be able to work in code with all such conversions disabled.

I agree.

Considering that we live in a world where users can (and do) define their own math types, having a hard-coded implicit widening hierarchy is just archaic.
I would prefer just deprecating them. Anyone who wants questionable (because lossy) implicit conversions can just define them on their own.

Kotlin doesn't have that cruft, and I didn't see them suffering from the lack of it.

martin odersky

unread,
Jun 10, 2013, 9:46:12 AM6/10/13
to scala-internals
On Mon, Jun 10, 2013 at 12:29 PM, Matthew Pocock <turingate...@gmail.com> wrote:
I'd love to see all widening shifted out from the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs to me and when it isn't, it's extra cognitive overhead, and I'd love to be able to work in code with all such conversions disabled.

Matthew

I'd heartily agree, except for this:

scala-hypothetical> val x = List(1, 2.0)
scala-hypothetical> x: List[AnyVal]

Not very intuitive...

Cheers

 - Martin



--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967

Matthew Pocock

unread,
Jun 10, 2013, 9:57:58 AM6/10/13
to scala-i...@googlegroups.com
On 10 June 2013 14:46, martin odersky <martin....@epfl.ch> wrote:



On Mon, Jun 10, 2013 at 12:29 PM, Matthew Pocock <turingate...@gmail.com> wrote:
I'd love to see all widening shifted out from the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs to me and when it isn't, it's extra cognitive overhead, and I'd love to be able to work in code with all such conversions disabled.

Matthew

I'd heartily agree, except for this:

scala-hypothetical> val x = List(1, 2.0)
scala-hypothetical> x: List[AnyVal]

Not very intuitive...

Sure, but then choosing to mangle this into a List[Int] or List[Double] in this case is also not very intuitive (cue wars about what intuitive means and how to measure it). I'd argue that this kind of thing is better caught by lint/codecheck tooling. When, in sane applications, do you actually want a list of some generic type such as AnyVal or AnyRef? It's nearly always a mistake.

Matthew

martin odersky

unread,
Jun 10, 2013, 10:06:52 AM6/10/13
to scala-internals
On Mon, Jun 10, 2013 at 3:57 PM, Matthew Pocock <turingate...@gmail.com> wrote:



On 10 June 2013 14:46, martin odersky <martin....@epfl.ch> wrote:



On Mon, Jun 10, 2013 at 12:29 PM, Matthew Pocock <turingate...@gmail.com> wrote:
I'd love to see all widening shifted out from the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs to me and when it isn't, it's extra cognitive overhead, and I'd love to be able to work in code with all such conversions disabled.

Matthew

I'd heartily agree, except for this:

scala-hypothetical> val x = List(1, 2.0)
scala-hypothetical> x: List[AnyVal]

Not very intuitive...

Sure, but then choosing to mangle this into a List[Int] or List[Double] in this case is also not very intuitive (cue wars about what intuitive means and how to measure it). I'd argue that this kind of thing is better caught by lint/codecheck tooling. When, in sane applications, do you actually want a list of some generic type such as AnyVal or AnyRef? It's nearly always a mistake.

It's definitely List[Double], not List[Int]. Just like 1 * 2.0 gives a Double, not an Int.  So, List[Double] is the only sane type. It's not that we did not have the alternative before. Scala did not always have numeric widening, so the type of List(1, 2.0) was List[AnyVal] up to around Scala 2.7 (forgot when exactly the change was introduced). But some users found the old behavior was a bug, and I can't blame them.

Generally, the fact that Scala type inference works surprisingly well in practice is the result of many, many small tweaks like this one. I don't like the tweaks but I like the fact that type inference works pretty well in most cases.

Cheers

 - Martin

Simon Ochsenreither

unread,
Jun 10, 2013, 11:51:34 AM6/10/13
to scala-i...@googlegroups.com

It's definitely List[Double], not List[Int]. Just like 1 * 2.0 gives a Double, not an Int.  So, List[Double] is the only sane type. It's not that we did not have the alternative before. Scala did not always have numeric widening, so the type of List(1, 2.0) was List[AnyVal] up to around Scala 2.7 (forgot when exactly the change was introduced). But some users found the old behavior was a bug, and I can't blame them.

I don't think that the current behavior is less buggy:

scala> List(123456789, 0f)
res23: List[Float] = List(1.23456792E8, 0.0)
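The loss is easy to pin down outside the REPL too; a small sketch:

```scala
// 123456789 needs 27 significant bits, but Float's mantissa has only 24,
// so the implicit Int => Float widening silently rounds to a nearby value.
val i = 123456789
val f: Float = i // implicit numeric widening, lossy

assert(f.toInt == 123456792) // the nearest representable Float
assert(f.toInt != i)         // the original value is gone
```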

 
Generally, the fact that Scala type inference works surprisingly well in practice is the result of many, many small tweaks like this one. I don't like the tweaks but I like the fact that type inference works pretty well in most cases.

Agree. In the end, it's about exchanging some not-so-nice corner cases for other not-so-nice corner cases. But if we can get rid of the implicit widening conversions, I don't think it is a zero-sum game anymore.
If people mix different number types, the compiler should give them a warning if AnyVal is inferred (we have plenty of diagnostics for Nothing, Unit and Any already).

Not running into weird ambiguities in method overloading resolution would be a nice benefit, too.

Simon Ochsenreither

unread,
Jun 10, 2013, 12:09:19 PM6/10/13
to scala-i...@googlegroups.com
In the end, I don't think we will end up with tons of List[AnyVal]s or similar. (Imho) the main idea behind getting rid of implicit widening conversions is that we want people to be explicit about potentially lossy conversions, both to increase the clarity of the code and reduce the potential of bugs.

Grzegorz Kossakowski

unread,
Jun 10, 2013, 1:56:36 PM6/10/13
to scala-internals
Hi Simon,

I think source compatibility is something we should take seriously. As Martin explained, both options were tried and we decided a long time ago to stick with lossy conversions. In particular, we didn't end up in the current situation by accident.

I believe it's beating a dead horse, now.


On 10 June 2013 09:09, Simon Ochsenreither <simon.och...@gmail.com> wrote:
In the end, I don't think we will end up with tons of List[AnyVal]s or similar. (Imho) the main idea behind getting rid of implicit widening conversions is that we want people to be explicit about potentially lossy conversions, both to increase the clarity of the code and reduce the potential of bugs.




--
Grzegorz Kossakowski
Scalac hacker at Typesafe
twitter: @gkossakowski

martin odersky

unread,
Jun 10, 2013, 2:06:54 PM6/10/13
to scala-internals
On Mon, Jun 10, 2013 at 5:51 PM, Simon Ochsenreither <simon.och...@gmail.com> wrote:

It's definitely List[Double], not List[Int]. Just like 1 * 2.0 gives a Double, not an Int.  So, List[Double] is the only sane type. It's not that we did not have the alternative before. Scala did not always have numeric widening, so the type of List(1, 2.0) was List[AnyVal] up to around Scala 2.7 (forgot when exactly the change was introduced). But some users found the old behavior was a bug, and I can't blame them.

I don't think that the current behavior is less buggy:

scala> List(123456789, 0f)
res23: List[Float] = List(1.23456792E8, 0.0)

 

Not sure I understand. Are you saying you will also throw out expressions like 123456789 * 1.0?
I don't believe you can do that and not break any sort of numeric code written in Scala.

For better or worse, Java has an implicit widening from Int (and even Long!) to Float. I believe we have much better things to do than to go back on this one and discuss whether or not it's the right thing to do. I don't really care about the widening, but I do care about needlessly breaking huge amounts of code.

So, IMO, the only question is whether the widening is applied to type parameter inference or it is purely modeled by overloaded methods and implicit conversions, as we did in the early days of Scala. Not applying it to type inference would simplify things in spec and compiler quite a bit.
But it's not so simple to go back, as I have already outlined.
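The two mechanisms Martin distinguishes can be seen side by side in a small sketch (assuming current Scala widening behavior):

```scala
// Mixed arithmetic works through overloading alone: Int defines
// `def *(x: Double): Double`, so no inference tweak is involved here.
val product: Double = 123456789 * 1.0
assert(product == 1.23456789e8)

// Type-argument inference is where weak subtyping matters: the element
// type of List(1, 2.0) is the weak lub Double, not AnyVal.
val xs = List(1, 2.0)
val elemCheck: List[Double] = xs // compiles only because Double was inferred
assert(xs == List(1.0, 2.0))
```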

Cheers

 - Martin


 
Generally, the fact that Scala type inference works surprisingly well in practice is the result of many, many small tweaks like this one. I don't like the tweaks but I like the fact that type inference works pretty well in most cases.

Agree. In the end, it's about exchanging not-so-nice corner-cases with other not-so-nice corner-cases. But if we can get rid of implicit widening conversion, I don't think it is a zero-sum game anymore.
If people mix different number types the compiler should give them a warning if AnyVal is inferred (we have plenty of diagnostics for Nothing, Unit and Any already).

Not running into weird ambiguities in method overloading resolution would be a nice benefit, too.

Simon Ochsenreither

unread,
Jun 10, 2013, 2:13:26 PM6/10/13
to scala-i...@googlegroups.com

I think source compatibility is something we should take seriously. As Martin explained, both options were tried and we decided a long time ago to stick with lossy conversions. In particular, we didn't end up in the current situation by accident.

I believe it's beating a dead horse, now.

a) I don't see how it would impact source compatibility in any way different from the plethora of deprecations/warnings we already have.
b) I think it is good to revisit some decisions from time to time, especially once we have gained more experience of the costs and benefits, and especially now that we have a "healthier" relation to implicits.
c) If "competitors" bulk-copy the language but explicitly drop a "feature", I think it makes sense to look closely at these details.

Simon Ochsenreither

unread,
Jun 10, 2013, 2:25:17 PM6/10/13
to scala-i...@googlegroups.com


On Monday, June 10, 2013 8:06:54 PM UTC+2, martin wrote:
Not sure I understand. Are you saying you will also throw out expressions like 123456789 * 1.0?
I don't believe you can do that and not break any sort of numeric code written in Scala.

No. There is a Double expression right there, interacting with our Int value, _and_ the invoked method is overloaded for Doubles.
Compare that to the List example: there is neither an operation applied at all, nor is there a Double interacting with it.
 
For better or worse, Java has an implicit widening from Int (and even Long!) to Float. I believe we have much better things to do than to go back on this one and discuss whether or not it's the right thing to do. I don't really care about the widening, but I do care about needlessly breaking huge amounts of code.

So, IMO, the only question is whether the widening is applied to type parameter inference or it is purely modeled by overloaded methods and implicit conversions, as we did in the early days of Scala. Not applying it to type inference would simplify things in spec and compiler quite a bit.
But it's not so simple to go back, as I have already outlined.

What I'm mostly concerned about is the evolution of the language. When the decision was made, Scala didn't have such a healthy ecosystem of math libraries with Spire, Breeze, ...; what was right then doesn't have to be right now. It would just be a shame to be stuck with hard-coded, special-cased, privileged conversions for some limited set of types while the rest of the ecosystem tries to move ahead with a wealth of useful numeric types.

Anyway, I don't see how it will break code. The (probably very limited) uses of lossy conversions might generate a deprecation warning, just like those hundreds of other deprecations I have seen since 2.7.

Simon Ochsenreither

unread,
Jun 10, 2013, 2:31:02 PM6/10/13
to scala-i...@googlegroups.com

Anyway, I don't see how it will break code. Uses (which are probably very limited) of lossy conversions might generate a deprecation warning, just like those hundreds of other deprecations I have seen since 2.7.

Not to mention that List[AnyVal] makes it easier to migrate to a more precise type like List[Int|Double] as soon as union types arrive in Scala. :-)

Paolo G. Giarrusso

unread,
Jun 10, 2013, 2:48:37 PM6/10/13
to scala-i...@googlegroups.com
What about  -Ywarn-numeric-widen? Would it help more to have it turned into a supported option (without a Y prefix)? Or do you require also having -Yno-numeric-widen (with/without the Y prefix)? Because probably it's easier to lobby for that, or get a pull request accepted (at least with the -Y).

My guess is that the people annoyed by this behavior are mostly advanced users (or at least, non-beginners), so having them do the extra work of adding an option might be a reasonable compromise. I don't know if there's a beginner-friendly policy for Scala, but I think its less-steep-than-Haskell learning curve is a reason for its success (though I don't know which language is easier for advanced users, if we ignore the huge extensibility advantage of Scala).

Rex Kerr

unread,
Jun 10, 2013, 3:01:41 PM6/10/13
to scala-i...@googlegroups.com
You don't write code the way I do, then.  Probably 80% of my source files would require a change if List(1, 2.5) evaluated to List[Any], or if max(0, 1.0/x) weren't the double max.
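For concreteness, a sketch of the max case: math.max is overloaded per primitive pair, and the Int literal widens so the Double overload is chosen.

```scala
// math.max has overloads (Int, Int), (Long, Long), (Float, Float) and
// (Double, Double). The literal 0 widens to Double, so the Double overload
// is selected and the result stays a Double.
val x = 4.0
val m = math.max(0, 1.0 / x)
assert(m == 0.25)

// Without the widening, every such call site would need the explicit form:
assert(math.max(0.0, 1.0 / x) == 0.25)
```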

Moving -Ywarn-numeric-widen to -X is fine, though, for those who write more correct code when they have to hit the d key more often.

  --Rex
 

Simon Ochsenreither

unread,
Jun 10, 2013, 3:05:47 PM6/10/13
to scala-i...@googlegroups.com

What about  -Ywarn-numeric-widen? Would it help more to have it turned into a supported option (without a Y prefix)? Or do you require also having -Yno-numeric-widen (with/without the Y prefix)? Because probably it's easier to lobby for that, or get a pull request accepted (at least with the -Y).

Agree. Maybe even a language import?

import language.doCrazyStuffWithNumbers

My guess is that the people annoyed by this behavior are mostly advanced users (or at least, non-beginners), so having them do the extra work of adding an option might be a reasonable compromise. I don't know if there's a beginner-friendly policy for Scala, but I think its less-steep-than-Haskell learning curve is a reason for its success (though I don't know which language is easier for advanced users, if we ignore the huge extensibility advantage of Scala).

Hard to say. I don't think it is that likely to hit an issue related to implicit widening compared to the other crazy stuff we throw in front of people learning the language. As an example, I was helping in a Scala workshop recently: we hit 1.+(2) in the first 20 minutes, and in the coding exercise later, 5 out of 15 people ran into the procedure trap.

It's not as bad as these things, although the issue is a lot harder to debug and explain when it actually comes up.
In this case my motivating factors are a) not carrying around stuff where even the designers of Java say it was a terrible mistake, and b) reducing the amount of magic happening per line of code.

Paul Phillips

unread,
Jun 10, 2013, 4:48:43 PM6/10/13
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 6:46 AM, martin odersky <martin....@epfl.ch> wrote:
scala-hypothetical> val x = List(1, 2.0)
scala-hypothetical> x: List[AnyVal]

Not very intuitive...

On the other other hand, there's this... even less intuitive given that it worked at level one.

scala> List(List(1), List(2.0))
res0: List[List[AnyVal]] = List(List(1), List(2.0))

Paul Phillips

unread,
Jun 10, 2013, 4:52:46 PM6/10/13
to scala-i...@googlegroups.com
On Mon, Jun 10, 2013 at 9:09 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
In the end, I don't think we will end up with tons of List[AnyVal]s or similar.

As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].

I forget why it's not part of this warning (under -Xlint).

scala> List(1, "abc")
<console>:8: warning: a type was inferred to be `Any`; this may indicate a programming error.
              List(1, "abc")
                   ^
res1: List[Any] = List(1, abc)

Paul Phillips

unread,
Jun 10, 2013, 4:56:00 PM6/10/13
to scala-i...@googlegroups.com
On Mon, Jun 10, 2013 at 12:01 PM, Rex Kerr <ich...@gmail.com> wrote:
Moving -Ywarn-numeric-widen to -X is fine, though, for those who write more correct code when they have to hit the d key more often.

There's no need to start with the caricature. I'm surprised if you can't see any reason why people might be reluctant to enshrine lossy implicit conversions as a permanent fixture in the default scope.

Rex Kerr

unread,
Jun 10, 2013, 6:58:41 PM6/10/13
to scala-i...@googlegroups.com
Maybe the caricature is too unkind.  If so, sorry; I'm letting extraneous frustrations leak through to here.  And I do see the problems with automatic Long->Float especially.  Augh.  Who ever thought that was a good idea? But it's now so well established that it probably needs to be supported approximately forever.

I didn't really mean it as a put-down, though, because in practice this is exactly what it boils down to: run through your code hitting d in all the right places.

I should know: I do this far too often when writing mutable code:

  var x = 0
  for (y <- ys) x += sqrt(y)

and have to come back with the missing d (or .0, depending on style).
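A sketch of the fix Rex describes; the commented-out line is the version that fails to compile:

```scala
import math.sqrt

val ys = List(1.0, 4.0, 9.0)

// var x = 0          // does not compile: x + sqrt(y) is a Double, x an Int
var x = 0.0           // the missing ".0" (or "0d")
for (y <- ys) x += sqrt(y)

assert(x == 6.0) // 1.0 + 2.0 + 3.0
```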

I am occasionally a proud member of the more-d-makes-my-code-more-correct club.

  --Rex




--

Paolo G. Giarrusso

unread,
Jun 10, 2013, 8:11:31 PM6/10/13
to scala-i...@googlegroups.com
On Tuesday, June 11, 2013 12:58:41 AM UTC+2, Rex Kerr wrote:
Maybe the caricature is too unkind.  If so, sorry; I'm letting extraneous frustrations leak through to here.  And I do see the problems with automatic Long->Float especially.  Augh.  Who ever thought that was a good idea?
 
Kernighan & Ritchie?

But I don't know a language with a good story for overloading of numeric literals. If you want to know what *not* to do *for beginners*, turn to Haskell. Following their approach, we'd get `1: [T: Numeric]T` and need explicit conversions. Which might be reasonable for advanced users, but it does keep getting in the way, and it's indeed hard for beginners (so much so that somebody maintained for years a Haskell for beginners without this feature, called Helium). OTOH, it's not like using Hindley-Milner would allow Haskell to have any implicit conversions whatsoever.
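Transliterated into Scala's typeclass vocabulary, Haskell-style polymorphic literals would look roughly like this sketch (`lit` is a made-up name for illustration; Scala has no such feature):

```scala
// In Haskell, the literal 1 has type Num a => a: its concrete type is
// chosen by the context. A rough Scala analogue using scala.math.Numeric:
def lit[T](n: Int)(implicit num: Numeric[T]): T =
  num.fromInt(n)

assert(lit[Int](1) == 1)
assert(lit[Double](1) == 1.0)
```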

--but it's now so well established that it probably needs to be supported approximately forever.

I didn't really mean it as a put-down, though, because in practice this is exactly what it boils down to: run through your code hitting d in all the right places.

That's just for constants. And you'll see some places where the compiler gives a type error and you could hit d, but don't want to.
Now it'd be cool if those `d`s helped the code reader noticeably, but I have no idea whether that's the case.

Paul Phillips

unread,
Jun 10, 2013, 8:18:50 PM6/10/13
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 8:11 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
Which might be reasonable for advanced users, but it does keep getting in the way, and it's indeed hard for beginners (so much so that somebody maintained for years a Haskell for beginners without this feature, called Helium).

One way to look at things might be "fast, correct, implicit: pick any two"

- currently we choose "fast and implicit", sacrificing correct
- I would prefer "fast and correct", sacrificing implicit
- and I think the beginner would do best with "correct and implicit", sacrificing fast

Correct and implicit, sacrificing fast - this translates to using arbitrary precision for everything, like Groovy does or maybe did.


Simon Ochsenreither

unread,
Jun 11, 2013, 7:30:56 AM6/11/13
to scala-i...@googlegroups.com

On the other other hand, there's this... even less intuitive given that it worked at level one.

scala> List(List(1), List(2.0))
res0: List[List[AnyVal]] = List(List(1), List(2.0))

Wow, ok. I expected it to work a bit better than papering over it in such a skin-deep manner.

Imho, the current situation is then considerably worse, if it doesn't even work consistently beyond superficial examples.

martin odersky

unread,
Jun 11, 2013, 9:39:11 AM6/11/13
to scala-internals
On Mon, Jun 10, 2013 at 10:52 PM, Paul Phillips <paulp@improving.org> wrote:

On Mon, Jun 10, 2013 at 9:09 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
In the end, I don't think we will end up with tons of List[AnyVal]s or similar.

As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].


I sympathize with that point of view, but we don't have a choice in the matter. Inference infers what it infers under general rules. And sometimes AnyVal is the best type according to the rules. The corner-stone of these rules is that type inference should infer the best solution of a constraint system, according to some partial order. The partial order is weak subtyping, i.e. subtyping augmented by numeric widening (*). I see the attraction of simplifying the partial order and go to plain subtyping instead. But then we _will_ infer List[AnyVal] as the type of List(1, 2.0).

Now, one could say let's just add extra rules and tweaks to make cases that we think make no sense go away. These tweaks always look good first. But in the long run they doom you, because your type checking becomes an unpredictable mess of interacting tweaks, each of which is harmless in isolation.

Cheers

 - Martin 

(*) I know that we make an exception to that principle in that we truncate lubs and glbs in ad-hoc ways to keep the types from exploding. It's something I am working hard to get rid of. By no means should we take it as a precedent to add more ad-hoc stuff to type inference.


Paul Phillips

unread,
Jun 11, 2013, 9:54:29 AM6/11/13
to scala-i...@googlegroups.com

On Tue, Jun 11, 2013 at 9:39 AM, martin odersky <martin....@epfl.ch> wrote:
but we don't have a choice in the matter

One of us doesn't have a choice in the matter, and the other doesn't want a choice in the matter, but neither of us is required by some immutable law not to have a choice in the matter.

At its simplest, all we have to do is not infer types beyond a certain level of generality, on the basis that inferring such types enables far more errors than it prevents. Not inferring Any or AnyVal does not require an assortment of ad hoc rules. It only requires not inferring those types.


Francois

unread,
Jun 11, 2013, 10:01:10 AM6/11/13
to scala-i...@googlegroups.com, martin odersky
On 11/06/2013 15:39, martin odersky wrote:



On Mon, Jun 10, 2013 at 10:52 PM, Paul Phillips <paulp@improving.org> wrote:

On Mon, Jun 10, 2013 at 9:09 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
In the end, I don't think we will end up with tons of List[AnyVal]s or similar.

As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].


I sympathize with that point of view, but we don't have a choice in the matter. Inference infers what it infers under general rules. And sometimes AnyVal is the best type according to the rules. The corner-stone of these rules is that type inference should infer the best solution of a constraint system, according to some partial order. The partial order is weak subtyping, i.e. subtyping augmented by numeric widening (*). I see the attraction of simplifying the partial order and go to plain subtyping instead. But then we _will_ infer List[AnyVal] as the type of List(1, 2.0).

Now, one could say let's just add extra rules and tweaks to make cases that we think make no sense go away. These tweaks always look good first. But in the long run they doom you, because your type checking becomes an unpredictable mess of interacting tweaks, each of which is harmless in isolation.


Sorry for barging into this discussion as a simple observer, but I believe Paul is just trying to say that inferring certain too-general types is not only useless but even harmful for the coder, and I believe the types in question are only Any/AnyRef/AnyVal.
Personally, on each (really very, very rare) occasion I use these types, I take care to make them explicit and to comment why it's not an error. And in ~7 years of Scala, I can't remember even one time I wished to have these types inferred (rather than having simply forgotten to change an object's type in a method or when creating a collection).
But again, I surely don't see all the implications of such a rule.

Cheers,

--
Francois ARMAND
http://fanf42.blogspot.com
http://www.normation.com

Francois

unread,
Jun 11, 2013, 10:02:08 AM6/11/13
to scala-i...@googlegroups.com, Paul Phillips
Sooo you were far faster than me to write an email.

Matthew Pocock

unread,
Jun 11, 2013, 10:09:36 AM6/11/13
to scala-i...@googlegroups.com

I have to agree. There has not been a single case in my code where an inference of one of the top types was the intended type. It is always a code smell and usually a mistake. Very occasionally I need to explicitly work with one of these as a return value, but usually this is a big red flag that I have the wrong type parameters or am missing a type class.

Sent from my android - may contain predictive text entertainment.

Paul Phillips

unread,
Jun 11, 2013, 10:46:41 AM6/11/13
to scala-i...@googlegroups.com

On Tue, Jun 11, 2013 at 7:30 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
Wow, ok. I expected it to work a bit better than papering over it in such a skin-deep manner.

Here's another way of looking at it.

scala> def f(x: Int, y: Long) = List(x, y)
f: (x: Int, y: Long)List[Long]

scala> def f[T >: Int <: Int, U >: Long <: Long](x: T, y: U) = List(x, y)
f: [T >: Int <: Int, U >: Long <: Long](x: T, y: U)List[AnyVal]

I wonder how often behavior diverges between concrete type A and parameterized type T >: A <: A. It seems like an undesirable quality in a type system.

Jason Zaugg

unread,
Jun 11, 2013, 10:54:59 AM6/11/13
to scala-i...@googlegroups.com
The problem I see with this (outside of the original problem of List(1, 2d)) is that such a rule will reveal the next level of "useless" types, e.g.:

List(Person("Bob"), Widget("wizzle")) // List[Serializable with Product] 
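This is easy to reproduce with any two unrelated case classes (a sketch; the exact printed form of the lub can vary by compiler version):

```scala
// Two unrelated case classes share only Product and Serializable,
// so that intersection is the (useless) least upper bound.
case class Person(name: String)
case class Widget(name: String)

val xs = List(Person("Bob"), Widget("wizzle"))
val check: List[Product with Serializable] = xs // compiles: the inferred lub

assert(xs.forall(_.isInstanceOf[Product]))
```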

-jason

Paul Phillips

unread,
Jun 11, 2013, 11:00:58 AM6/11/13
to scala-i...@googlegroups.com
On Tue, Jun 11, 2013 at 10:54 AM, Jason Zaugg <jza...@gmail.com> wrote:
The problem I see with this (outside of the original problem of List(1, 2d)) is that such a rule will reveal the next level of "useless" types, e.g:

List(Person("Bob"), Widget("wizzle")) // List[Serializable with Product] 

Indeed, this is the best objection of which I am aware.

It's interesting that we all know this is a useless type to infer. Me, I think it would be possible to translate the "useless inference" intuition into code, and/or allow the end programmer to influence the definition of useless. I think a type system which took steps not to make useless inferences would be a far superior type system. I understand that some people value generality above all else, but I am not among them.

That said, most things we do don't completely solve most problems, but we still do things.

 

martin odersky

unread,
Jun 11, 2013, 11:23:41 AM6/11/13