I removed all numeric widening conversions — and not a single test broke


Simon Ochsenreither

Jun 10, 2013, 4:38:18 AM
to scala-i...@googlegroups.com
See https://github.com/soc/scala/commits/topic/no-numeric-conversions

Either we have terrible test coverage or, what seems more likely after testing, the implicit defs were just defined “for fun”: they don't exhibit any actual effect. E.g., the tons of hard-coded widening behaviour inside the compiler don't supplement the implicit defs, but rather define all of the behavior, rendering the implicits useless (except for adding some additional implicit conversions to the scope).

Opinions?
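The claim can be illustrated with a small REPL-style sketch (mine, not from the linked branch): with the implicit defs gone, lines like these still compile, because weak conformance lives in the type checker, not in Predef.

```scala
// Sketch of the claim, in REPL/script style: primitive widening goes
// through "weak conformance" wired into the type checker, so deleting
// the implicit defs (int2long and friends) leaves these lines compiling.
val i: Int = 42
val l: Long = i       // Int -> Long: no visible conversion is inserted
val d: Double = l     // Long -> Double: likewise
assert(l == 42L && d == 42.0)
```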

Paul Phillips

Jun 10, 2013, 4:46:51 AM
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 1:38 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
Either we have terrible test coverage or, what seems more likely after testing, the implicit defs were just defined “for fun”: they don't exhibit any actual effect. E.g., the tons of hard-coded widening behaviour inside the compiler don't supplement the implicit defs, but rather define all of the behavior, rendering the implicits useless (except for adding some additional implicit conversions to the scope).

I can reveal it was not "for fun": I've tried to remove them before, so I know where things break without them. If they don't have any effect on any test, it's because something has changed between 2.10 and now - indeed it has - see below.

class A {
  def f[T <: Int](x: T): Long = x
}
/***
% scalac210_2 -Xprint:typer ./a.scala |grep 'def f'
    def f[T >: Nothing <: Int](x: T): Long = scala.this.Int.int2long(x)

% scalac3 -Xprint:typer ./a.scala |grep 'def f'
    def f[T <: Int](x: T): Long = x.toLong
***/


Paul Phillips

Jun 10, 2013, 5:12:04 AM
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 1:46 AM, Paul Phillips <pa...@improving.org> wrote:
something has changed between 2.10 and now

Jason Zaugg

Jun 10, 2013, 5:17:48 AM
to scala-i...@googlegroups.com
I think they were retained for:

scala> def foo[A](a: A)(implicit ev: A => Int) = ev(a)
foo: [A](a: A)(implicit ev: A => Int)Int

scala> reify(foo('!'))
res3: reflect.runtime.universe.Expr[Int] =
Expr[Int]($read.foo('!')({
  ((x) => Char.char2int(x))
}))

We should add tests for that, though.

-jason
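Jason's use case can be sketched self-containedly with a hand-rolled stand-in for the library view (`charToInt` is my name, not the library's): summoning an `A => Int` function value needs an actual implicit in scope.

```scala
// The use case Jason describes, with a stand-in for the Char => Int view
// so the sketch compiles on its own.
def foo[A](a: A)(implicit ev: A => Int): Int = ev(a)
implicit val charToInt: Char => Int = _.toInt  // stand-in for the library view

assert(foo('!') == 33)   // '!' has code point 33
```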

Matthew Pocock

Jun 10, 2013, 6:29:14 AM
to scala-i...@googlegroups.com
I'd love to see all widening shifted out of the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs for me, and when it isn't, it's extra cognitive overhead; I'd love to be able to work in code with all such conversions disabled.

Matthew


--
You received this message because you are subscribed to the Google Groups "scala-internals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-interna...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.



--
Dr Matthew Pocock
Turing ate my hamster LTD

Integrative Bioinformatics Group, School of Computing Science, Newcastle University

skype: matthew.pocock
tel: (0191) 2566550

Paul Phillips

Jun 10, 2013, 6:33:26 AM
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 3:29 AM, Matthew Pocock <turingate...@gmail.com> wrote:
I'd love to see all widening shifted out of the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs for me, and when it isn't, it's extra cognitive overhead; I'd love to be able to work in code with all such conversions disabled.

Some baby gets scooped into the bathwater, but you can do it already.

% scalac3 -Ywarn-numeric-widen -Xfatal-warnings b.scala 
b.scala:1: warning: implicit numeric widening
class A { def f(x: Int): Long = x }
                                ^
error: No warnings can be incurred under -Xfatal-warnings.
one warning found
one error found


Kevin Wright

Jun 10, 2013, 6:36:49 AM
to scala-i...@googlegroups.com
For reasons best illustrated in humorous comic form: http://www.smbc-comics.com/index.php?db=comics&id=2999#comic

Simon Ochsenreither

Jun 10, 2013, 8:12:00 AM
to scala-i...@googlegroups.com

I'd love to see all widening shifted out of the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs for me, and when it isn't, it's extra cognitive overhead; I'd love to be able to work in code with all such conversions disabled.

I agree.

Considering that we live in a world where users can (and do) define their own math types, having a hard-coded implicit widening hierarchy is just archaic.
I would prefer just deprecating them. Everyone who wants questionable (because lossy) implicit conversions can just define them on their own.

Kotlin doesn't have that cruft, and I haven't seen them suffering from the lack of it.
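That alternative can be sketched in a few lines (the object and conversion names here are hypothetical, chosen for illustration):

```scala
import scala.language.implicitConversions

// Sketch of the "define it on your own" alternative: a widening as a
// plain, importable (and excludable) implicit instead of compiler magic.
object UserWidening {
  implicit def intToBigDecimal(i: Int): BigDecimal = BigDecimal(i)  // hypothetical name
}
import UserWidening._

val x: BigDecimal = 42   // inserted by intToBigDecimal, not by the compiler
assert(x == BigDecimal(42))
```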

martin odersky

Jun 10, 2013, 9:46:12 AM
to scala-internals
On Mon, Jun 10, 2013 at 12:29 PM, Matthew Pocock <turingate...@gmail.com> wrote:
I'd love to see all widening shifted out of the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs for me, and when it isn't, it's extra cognitive overhead; I'd love to be able to work in code with all such conversions disabled.

Matthew

I'd heartily agree, except for this:

scala-hypothetical> val x = List(1, 2.0)
scala-hypothetical> x: List[AnyVal]

Not very intuitive...

Cheers

 - Martin



--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967

Matthew Pocock

Jun 10, 2013, 9:57:58 AM
to scala-i...@googlegroups.com
On 10 June 2013 14:46, martin odersky <martin....@epfl.ch> wrote:



On Mon, Jun 10, 2013 at 12:29 PM, Matthew Pocock <turingate...@gmail.com> wrote:
I'd love to see all widening shifted out of the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs for me, and when it isn't, it's extra cognitive overhead; I'd love to be able to work in code with all such conversions disabled.

Matthew

I'd heartily agree, except for this:

scala-hypothetical> val x = List(1, 2.0)
scala-hypothetical> x: List[AnyVal]

Not very intuitive...

Sure, but then choosing to mangle this into a List[Int] or List[Double] in this case is also not very intuitive (cue wars about what intuitive means and how to measure it). I'd argue that this kind of thing is better caught by lint/codecheck tooling. When, in sane applications, do you actually want a list of some generic type such as AnyVal or AnyRef? It's nearly always a mistake.

Matthew

martin odersky

Jun 10, 2013, 10:06:52 AM
to scala-internals
On Mon, Jun 10, 2013 at 3:57 PM, Matthew Pocock <turingate...@gmail.com> wrote:



On 10 June 2013 14:46, martin odersky <martin....@epfl.ch> wrote:



On Mon, Jun 10, 2013 at 12:29 PM, Matthew Pocock <turingate...@gmail.com> wrote:
I'd love to see all widening shifted out of the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs for me, and when it isn't, it's extra cognitive overhead; I'd love to be able to work in code with all such conversions disabled.

Matthew

I'd heartily agree, except for this:

scala-hypothetical> val x = List(1, 2.0)
scala-hypothetical> x: List[AnyVal]

Not very intuitive...

Sure, but then choosing to mangle this into a List[Int] or List[Double] in this case is also not very intuitive (cue wars about what intuitive means and how to measure it). I'd argue that this kind of thing is better caught by lint/codecheck tooling. When, in sane applications, do you actually want a list of some generic type such as AnyVal or AnyRef? It's nearly always a mistake.

It's definitely List[Double], not List[Int], just like 1 * 2.0 gives a Double, not an Int. So List[Double] is the only sane type. It's not that we did not have the alternative before: Scala did not always have numeric widening, so the type of List(1, 2.0) was List[AnyVal] up to around Scala 2.7 (I forget when exactly the change was introduced). But some users found the old behavior was a bug, and I can't blame them.

Generally, the fact that Scala type inference works surprisingly well in practice is the result of many, many small tweaks like this one. I don't like the tweaks but I like the fact that type inference works pretty well in most cases.

Cheers

 - Martin

Simon Ochsenreither

Jun 10, 2013, 11:51:34 AM
to scala-i...@googlegroups.com

It's definitely List[Double], not List[Int], just like 1 * 2.0 gives a Double, not an Int. So List[Double] is the only sane type. It's not that we did not have the alternative before: Scala did not always have numeric widening, so the type of List(1, 2.0) was List[AnyVal] up to around Scala 2.7 (I forget when exactly the change was introduced). But some users found the old behavior was a bug, and I can't blame them.

I don't think that the current behavior is less buggy:

scala> List(123456789, 0f)
res23: List[Float] = List(1.23456792E8, 0.0)

 
Generally, the fact that Scala type inference works surprisingly well in practice is the result of many, many small tweaks like this one. I don't like the tweaks but I like the fact that type inference works pretty well in most cases.

Agree. In the end, it's about exchanging not-so-nice corner cases for other not-so-nice corner cases. But if we can get rid of the implicit widening conversions, I don't think it is a zero-sum game anymore.
If people mix different number types, the compiler should give them a warning when AnyVal is inferred (we have plenty of diagnostics for Nothing, Unit and Any already).

Not running into weird ambiguities in method overloading resolution would be a nice benefit, too.
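For concreteness, the precision loss in Simon's List(123456789, 0f) example can be checked directly (a sketch of mine, spelling out the widening explicitly):

```scala
// The precision loss behind List(123456789, 0f): Float has only 24 bits
// of mantissa, so Int -> Float widening is lossy for large Ints.
val i = 123456789
val f = i.toFloat              // the explicit spelling of the implicit widening
assert(f.toInt == 123456792)   // nearest representable Float, off by 3
assert(f.toInt != i)
```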

Simon Ochsenreither

Jun 10, 2013, 12:09:19 PM
to scala-i...@googlegroups.com
In the end, I don't think we will end up with tons of List[AnyVal]s or similar. (Imho) the main idea behind getting rid of implicit widening conversions is that we want people to be explicit about potentially lossy conversions, both to increase the clarity of the code and to reduce the potential for bugs.

Grzegorz Kossakowski

Jun 10, 2013, 1:56:36 PM
to scala-internals
Hi Simon,

I think source compatibility is something we should treat seriously. As Martin explained, both options were tried, and we decided a long time ago to stick with lossy conversions. In particular, we didn't end up in the current situation by accident.

I believe this is beating a dead horse now.


On 10 June 2013 09:09, Simon Ochsenreither <simon.och...@gmail.com> wrote:
In the end, I don't think we will end up with tons of List[AnyVal]s or similar. (Imho) the main idea behind getting rid of implicit widening conversions is that we want people to be explicit about potentially lossy conversions, both to increase the clarity of the code and to reduce the potential for bugs.




--
Grzegorz Kossakowski
Scalac hacker at Typesafe
twitter: @gkossakowski

martin odersky

Jun 10, 2013, 2:06:54 PM
to scala-internals
On Mon, Jun 10, 2013 at 5:51 PM, Simon Ochsenreither <simon.och...@gmail.com> wrote:

It's definitely List[Double], not List[Int], just like 1 * 2.0 gives a Double, not an Int. So List[Double] is the only sane type. It's not that we did not have the alternative before: Scala did not always have numeric widening, so the type of List(1, 2.0) was List[AnyVal] up to around Scala 2.7 (I forget when exactly the change was introduced). But some users found the old behavior was a bug, and I can't blame them.

I don't think that the current behavior is less buggy:

scala> List(123456789, 0f)
res23: List[Float] = List(1.23456792E8, 0.0)

 

Not sure I understand. Are you saying you will also throw out expressions like 123456789 * 1.0?
I don't believe you can do that without breaking all sorts of numeric code written in Scala.

For better or worse, Java has an implicit widening from Int (and even Long!) to Float. I believe we have much better things to do than to go back on this one and discuss whether or not it's the right thing to do. I don't really care about the widening, but I do care about needlessly breaking huge amounts of code.

So, IMO, the only question is whether the widening is applied to type parameter inference or is purely modeled by overloaded methods and implicit conversions, as we did in the early days of Scala. Not applying it to type inference would simplify things in the spec and compiler quite a bit.
But it's not so simple to go back, as I have already outlined.

Cheers

 - Martin


 
Generally, the fact that Scala type inference works surprisingly well in practice is the result of many, many small tweaks like this one. I don't like the tweaks but I like the fact that type inference works pretty well in most cases.

Agree. In the end, it's about exchanging not-so-nice corner-cases with other not-so-nice corner-cases. But if we can get rid of implicit widening conversion, I don't think it is a zero-sum game anymore.
If people mix different number types the compiler should give them a warning if AnyVal is inferred (we have plenty of diagnostics for Nothing, Unit and Any already).

Not running into weird ambiguities in method overloading resolution would be a nice benefit, too.

Simon Ochsenreither

Jun 10, 2013, 2:13:26 PM
to scala-i...@googlegroups.com

I think source compatibility is something we should treat seriously. As Martin explained, both options were tried, and we decided a long time ago to stick with lossy conversions. In particular, we didn't end up in the current situation by accident.

I believe this is beating a dead horse now.

a) I don't see how it would impact source compatibility in any way different from the plethora of deprecations/warnings we already have.
b) I think it is good to revisit decisions from time to time, especially once we have gained more experience of the costs and benefits, and especially now that we have a more "healthy" relationship to implicits.
c) If "competitors" bulk-copy the language but explicitly drop a "feature", I think it makes sense to look closely at these details.

Simon Ochsenreither

Jun 10, 2013, 2:25:17 PM
to scala-i...@googlegroups.com


On Monday, June 10, 2013 8:06:54 PM UTC+2, martin wrote:
Not sure I understand. Are you saying you will also throw out expressions like 123456789 * 1.0?
I don't believe you can do that without breaking all sorts of numeric code written in Scala.

No. There is a Double expression right there, interacting with our Int value, _and_ the invoked method is overloaded for Doubles.
Compare that to the List example: no operation is applied at all, nor is there a Double interacting with anything.
 
For better or worse, Java has an implicit widening from Int (and even Long!) to Float. I believe we have much better things to do than to go back on this one and discuss whether or not it's the right thing to do. I don't really care about the widening, but I do care about needlessly breaking huge amounts of code.

So, IMO, the only question is whether the widening is applied to type parameter inference or is purely modeled by overloaded methods and implicit conversions, as we did in the early days of Scala. Not applying it to type inference would simplify things in the spec and compiler quite a bit.
But it's not so simple to go back, as I have already outlined.

What I'm mostly concerned about is the evolution of the language. When the decision was made, Scala didn't have such a healthy ecosystem of math libraries, with Spire, Breeze, ...; what was right then doesn't have to be right now. It would just be a shame to be stuck with hard-coded, special-cased, privileged conversions for some limited set of types while the rest of the ecosystem tries to move ahead with a wealth of useful numeric types.

Anyway, I don't see how it would break code. Uses of lossy conversions (which are probably very limited) might generate a deprecation warning, just like those hundreds of other deprecations I have seen since 2.7.

Simon Ochsenreither

Jun 10, 2013, 2:31:02 PM
to scala-i...@googlegroups.com

Anyway, I don't see how it would break code. Uses of lossy conversions (which are probably very limited) might generate a deprecation warning, just like those hundreds of other deprecations I have seen since 2.7.

Not to mention that List[AnyVal] makes it easier to migrate to a more precise type like List[Int|Double] as soon as union types arrive in Scala. :-)

Paolo G. Giarrusso

Jun 10, 2013, 2:48:37 PM
to scala-i...@googlegroups.com
What about -Ywarn-numeric-widen? Would it help more to have it turned into a supported option (without the -Y prefix)? Or do you also require a -Yno-numeric-widen (with or without the -Y prefix)? Because it's probably easier to lobby for that, or to get a pull request accepted (at least with the -Y).

My guess is that the people annoyed by this behavior are mostly advanced users (or at least, non-beginners), so having them do the extra work of adding an option might be a reasonable compromise. I don't know if there's a beginner-friendly policy for Scala, but I think its less-steep-than-Haskell learning curve is a reason for its success (though I don't know which language is easier for advanced users, if we ignore the huge extensibility advantage of Scala).

Rex Kerr

Jun 10, 2013, 3:01:41 PM
to scala-i...@googlegroups.com
You don't write code the way I do, then.  Probably 80% of my source files would require a change if List(1,2.5) evaluates to List[Any], or if max(0, 1.0/x) isn't the double max.

Moving -Ywarn-numeric-widen to -X is fine, though, for those who write more correct code when they have to hit the d key more often.

  --Rex
 

Simon Ochsenreither

Jun 10, 2013, 3:05:47 PM
to scala-i...@googlegroups.com

What about  -Ywarn-numeric-widen? Would it help more to have it turned into a supported option (without a Y prefix)? Or do you require also having -Yno-numeric-widen (with/without the Y prefix)? Because probably it's easier to lobby for that, or get a pull request accepted (at least with the -Y).

Agree. Maybe even a language import?

import language.doCrazyStuffWithNumbers

My guess is that the people annoyed by this behavior are mostly advanced users (or at least, non-beginners), so having them do the extra work of adding an option might be a reasonable compromise. I don't know if there's a beginner-friendly policy for Scala, but I think its less-steep-than-Haskell learning curve is a reason for its success (though I don't know which language is easier for advanced users, if we ignore the huge extensibility advantage of Scala).

Hard to say. I don't think it is that likely to hit an issue related to implicit widening, compared to the other crazy stuff we throw in front of people learning the language. As an example, I was helping at a Scala workshop recently: we hit 1.+(2) in the first 20 minutes, and in the coding exercise later, 5 out of 15 people ran into the procedure trap.

It's not as bad as those things, although the issue is a lot harder to debug and explain when it actually comes up.
In this case my motivating factors are a) not carrying around stuff where even the designers of Java say it was a terrible mistake, and b) reducing the amount of magic happening per line of code.

Paul Phillips

Jun 10, 2013, 4:48:43 PM
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 6:46 AM, martin odersky <martin....@epfl.ch> wrote:
scala-hypothetical> val x = List(1, 2.0)
scala-hypothetical> x: List[AnyVal]

Not very intuitive...

On the other other hand, there's this... even less intuitive given that it worked at level one.

scala> List(List(1), List(2.0))
res0: List[List[AnyVal]] = List(List(1), List(2.0))
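The divergence is also visible at runtime; here is a small sketch of mine (the type comments reflect Scala 2 inference):

```scala
// Widening kicks in when the elements are immediate arguments of List(...),
// but not one nesting level down.
val flat   = List(1, 2.0)                // inferred List[Double]
val nested = List(List(1), List(2.0))    // Scala 2 infers List[List[AnyVal]]

assert((flat.head: Any).isInstanceOf[Double])      // the 1 was widened to 1.0
assert((nested.head.head: Any).isInstanceOf[Int])  // the inner 1 stayed an Int
```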

Paul Phillips

Jun 10, 2013, 4:52:46 PM
to scala-i...@googlegroups.com
On Mon, Jun 10, 2013 at 9:09 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
In the end, I don't think we will end up with tons of List[AnyVal]s or similar.

As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].

I forget why it's not part of this warning (under -Xlint).

scala> List(1, "abc")
<console>:8: warning: a type was inferred to be `Any`; this may indicate a programming error.
              List(1, "abc")
                   ^
res1: List[Any] = List(1, abc)

Paul Phillips

Jun 10, 2013, 4:56:00 PM
to scala-i...@googlegroups.com
On Mon, Jun 10, 2013 at 12:01 PM, Rex Kerr <ich...@gmail.com> wrote:
Moving -Ywarn-numeric-widen to -X is fine, though, for those who write more correct code when they have to hit the d key more often.

There's no need to start with the caricature. I'm surprised if you can't see any reason why people might be reluctant to enshrine lossy implicit conversions as a permanent fixture in the default scope.

Rex Kerr

Jun 10, 2013, 6:58:41 PM
to scala-i...@googlegroups.com
Maybe the caricature is too unkind. If so, sorry; I'm letting extraneous frustrations leak through to here. And I do see the problems with automatic Long->Float especially. Augh. Who ever thought that was a good idea? But it's now so well established that it probably needs to be supported approximately forever.

I didn't really mean it as a put-down, though, because in practice this is exactly what it boils down to: run through your code hitting d in all the right places.

I should know: I do this far too often when writing mutable code:

  var x = 0
  for (y <- ys) x += sqrt(y)

and have to come back with the missing d (or .0, depending on style).

I am occasionally a proud member of the more-d-makes-my-code-more-correct club.

  --Rex
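Rex's snippet, made runnable with sample data of my choosing; with `var x = 0` the loop would not compile, since `x + sqrt(y)` is a Double and can't flow back into an Int var, hence the "missing d (or .0)":

```scala
import scala.math.sqrt

val ys = List(1.0, 4.0, 9.0)   // sample data, mine
var x = 0.0                    // the ".0" that makes the loop type-check
for (y <- ys) x += sqrt(y)
assert(x == 6.0)               // 1 + 2 + 3, exact for perfect squares
```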





Paolo G. Giarrusso

Jun 10, 2013, 8:11:31 PM
to scala-i...@googlegroups.com
On Tuesday, June 11, 2013 12:58:41 AM UTC+2, Rex Kerr wrote:
Maybe the caricature is too unkind.  If so, sorry; I'm letting extraneous frustrations leak through to here.  And I do see the problems with automatic Long->Float especially.  Augh.  Who ever thought that was a good idea?
 
Kernighan & Ritchie?

But I don't know a language with a good story for overloading of numeric literals. If you want to know what *not* to do *for beginners*, turn to Haskell: following their approach, we'd get `1: [T: Numeric]T` and need explicit conversions. That might be reasonable for advanced users, but it does keep getting in the way, and it's indeed hard for beginners (so much so that somebody maintained, for years, a beginners' Haskell without this feature, called Helium). OTOH, it's not as if using Hindley-Milner would allow Haskell to have any implicit conversions whatsoever.

--but it's now so well established that it probably needs to be supported approximately forever.

I didn't really mean it as a put-down, though, because in practice this is exactly what it boils down to: run through your code hitting d in all the right places.

That's just for constants. And you'll see some places where the compiler gives a type error where you could hit d, but don't want to.
Now, it'd be cool if those `d`s helped the code reader noticeably, but I have no idea whether that's the case.

Paul Phillips

Jun 10, 2013, 8:18:50 PM
to scala-i...@googlegroups.com

On Mon, Jun 10, 2013 at 8:11 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
That might be reasonable for advanced users, but it does keep getting in the way, and it's indeed hard for beginners (so much so that somebody maintained, for years, a beginners' Haskell without this feature, called Helium).

One way to look at things might be "fast, correct, implicit: pick any two"

- currently we choose "fast and implicit", sacrificing correct
- I would prefer "fast and correct", sacrificing implicit
- and I think the beginner would do best with "correct and implicit", sacrificing fast

Correct and implicit, sacrificing fast: this translates to using arbitrary precision for everything, like Groovy does (or maybe did).


Simon Ochsenreither

Jun 11, 2013, 7:30:56 AM
to scala-i...@googlegroups.com

On the other other hand, there's this... even less intuitive given that it worked at level one.

scala> List(List(1), List(2.0))
res0: List[List[AnyVal]] = List(List(1), List(2.0))

Wow, ok. I expected it to work a bit better than papering over it in such a skin-deep manner.

Imho the current situation is then considerably worse, if it doesn't even work consistently in any non-superficial example.

martin odersky

Jun 11, 2013, 9:39:11 AM
to scala-internals
On Mon, Jun 10, 2013 at 10:52 PM, Paul Phillips <paulp@improving.org> wrote:

On Mon, Jun 10, 2013 at 9:09 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
In the end, I don't think we will end up with tons of List[AnyVal]s or similar.

As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].


I sympathize with that point of view, but we don't have a choice in the matter. Inference infers what it infers under general rules, and sometimes AnyVal is the best type according to those rules. The cornerstone of these rules is that type inference should infer the best solution of a constraint system, according to some partial order. The partial order is weak subtyping, i.e. subtyping augmented by numeric widening (*). I see the attraction of simplifying the partial order and going to plain subtyping instead. But then we _will_ infer List[AnyVal] as the type of List(1, 2.0).

Now, one could say let's just add extra rules and tweaks to make the cases that we think make no sense go away. These tweaks always look good at first. But in the long run they doom you, because your type checking becomes an unpredictable mess of interacting tweaks, each of which is harmless in isolation.

Cheers

 - Martin 

(*) I know that we make an exception to that principle in that we truncate lubs and glbs in ad-hoc ways to keep the types from exploding. It's something I am working hard to get rid of. By no means should we take it as a precedent for adding more ad-hoc stuff to type inference.


Paul Phillips

Jun 11, 2013, 9:54:29 AM
to scala-i...@googlegroups.com

On Tue, Jun 11, 2013 at 9:39 AM, martin odersky <martin....@epfl.ch> wrote:
but we don't have a choice in the matter

One of us doesn't have a choice in the matter, and the other doesn't want a choice in the matter, but neither of us is required by some immutable law not to have a choice in the matter.

At its simplest, all we have to do is not infer types beyond a certain level of generality, on the basis that inferring such types enables far more errors than it prevents. Not inferring Any or AnyVal does not require an assortment of ad hoc rules. It only requires not inferring those types.


Francois

Jun 11, 2013, 10:01:10 AM
to scala-i...@googlegroups.com, martin odersky
On 11/06/2013 15:39, martin odersky wrote:



On Mon, Jun 10, 2013 at 10:52 PM, Paul Phillips <paulp@improving.org> wrote:

On Mon, Jun 10, 2013 at 9:09 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
In the end, I don't think we will end up with tons of List[AnyVal]s or similar.

As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].


I sympathize with that point of view, but we don't have a choice in the matter. Inference infers what it infers under general rules, and sometimes AnyVal is the best type according to those rules. The cornerstone of these rules is that type inference should infer the best solution of a constraint system, according to some partial order. The partial order is weak subtyping, i.e. subtyping augmented by numeric widening (*). I see the attraction of simplifying the partial order and going to plain subtyping instead. But then we _will_ infer List[AnyVal] as the type of List(1, 2.0).

Now, one could say let's just add extra rules and tweaks to make the cases that we think make no sense go away. These tweaks always look good at first. But in the long run they doom you, because your type checking becomes an unpredictable mess of interacting tweaks, each of which is harmless in isolation.


Sorry for suddenly appearing in this discussion as a simple observer, but I believe Paul is just trying to say that inferring certain too-general types is not only useless but even harmful for the coder, and I believe the types in question are only Any/AnyRef/AnyVal.
Personally, each (really very, very rare) time I use these types, I take care to make them explicit and comment on why it's not an error. And in ~7 years of Scala, I can't remember even one time I wished to have these types inferred (rather than having simply forgotten to change an object's type in a method or when creating a collection).
But again, I surely don't see all the implications of such a rule.

Cheers,

--
Francois ARMAND
http://fanf42.blogspot.com
http://www.normation.com

Francois

Jun 11, 2013, 10:02:08 AM
to scala-i...@googlegroups.com, Paul Phillips
Sooo you were far faster than me to write an email.

Matthew Pocock

Jun 11, 2013, 10:09:36 AM
to scala-i...@googlegroups.com

I have to agree. There's not been a single case in my code where an inference of the upper types was the intended type. It is always a code smell and usually a mistake. Very occasionally I need to explicitly work with one of these as a return value, but usually this is a big red flag that I have the wrong type parameters or am missing a type class.

Sent from my android - may contain predictive text entertainment.

Paul Phillips

Jun 11, 2013, 10:46:41 AM
to scala-i...@googlegroups.com

On Tue, Jun 11, 2013 at 7:30 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
Wow, ok. I expected it to work a bit better than papering over it in such a skin-deep manner.

Here's another way of looking at it.

scala> def f(x: Int, y: Long) = List(x, y)
f: (x: Int, y: Long)List[Long]

scala> def f[T >: Int <: Int, U >: Long <: Long](x: T, y: U) = List(x, y)
f: [T >: Int <: Int, U >: Long <: Long](x: T, y: U)List[AnyVal]

I wonder how often behavior diverges between concrete type A and parameterized type T >: A <: A. It seems like an undesirable quality in a type system.
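A runnable version of that pair (assertion style mine) makes the divergence visible at runtime rather than only in -Xprint:typer output:

```scala
// Paul's two definitions: the concrete version widens the Int argument,
// the bounded-type-parameter version does not.
def f1(x: Int, y: Long): List[Long] = List(x, y)   // x is widened to Long
def f2[T >: Int <: Int, U >: Long <: Long](x: T, y: U) = List(x, y)

assert((f1(1, 2L).head: Any).isInstanceOf[Long])   // widened
assert((f2(1, 2L).head: Any).isInstanceOf[Int])    // not widened
```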

Jason Zaugg

Jun 11, 2013, 10:54:59 AM
to scala-i...@googlegroups.com
The problem I see with this (outside of the original problem of List(1, 2d)) is that such a rule will reveal the next level of "useless" types, e.g.:

List(Person("Bob"), Widget("wizzle")) // List[Serializable with Product] 

-jason
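Jason's example, spelled out with two stand-in case classes (names from his comment), showing that the lub of unrelated case classes is the rarely useful Product with Serializable:

```scala
// Unrelated case classes share only Product and Serializable, so that
// intersection is what inference falls back to in Scala 2.
case class Person(name: String)
case class Widget(name: String)

val xs: List[Product with Serializable] = List(Person("Bob"), Widget("wizzle"))
assert(xs.map(_.productPrefix) == List("Person", "Widget"))
```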

Paul Phillips

Jun 11, 2013, 11:00:58 AM
to scala-i...@googlegroups.com
On Tue, Jun 11, 2013 at 10:54 AM, Jason Zaugg <jza...@gmail.com> wrote:
The problem I see with this (outside of the original problem of List(1, 2d)) is that such a rule will reveal the next level of "useless" types, e.g.:

List(Person("Bob"), Widget("wizzle")) // List[Serializable with Product] 

Indeed, this is the best objection of which I am aware.

It's interesting that we all know this is a useless type to infer. Me, I think it would be possible to translate the "useless inference" intuition into code, and/or allow the end programmer to influence the definition of useless. I think a type system which took steps not to make useless inferences would be a far superior type system. I understand that some people value generality above all else, but I am not among them.

That said, most things we do don't completely solve most problems, but we still do things.

 

martin odersky

Jun 11, 2013, 11:23:41 AM
to scala-internals
On Tue, Jun 11, 2013 at 4:54 PM, Jason Zaugg <jza...@gmail.com> wrote:
On Tue, Jun 11, 2013 at 9:54 AM, Paul Phillips <pa...@improving.org> wrote:

On Tue, Jun 11, 2013 at 9:39 AM, martin odersky <martin....@epfl.ch> wrote:
but we don't have a choice in the matter

One of us doesn't have a choice in the matter, and the other doesn't want a choice in the matter, but neither of us is required by some immutable law not to have a choice in the matter.

At its simplest, all we have to do is not infer types beyond a certain level of generality, on the basis that inferring such types enables far more errors than it prevents. Not inferring Any or AnyVal does not require an assortment of ad hoc rules. It only requires not inferring those types.

 
I have personally used List[Any] quite often as a type. It's simply not feasible to "know" that these types are useless. Well, I agree that List[AnyVal] is pretty useless, but List[Any] is definitely not. And, who knows, maybe the user is happy to have a List[Any] inferred, and would not mind getting a List[AnyVal] because they erase to the same type.
So one might prefer List[AnyVal] to an annoying warning or error.

Also, as Jason says, it does not stop with these types. In the end, how do you choose?

It's not a question I personally want to tackle now. To stay sane with type inference (and I am not saying we are there yet!) you have to ruthlessly simplify. There's a constant tension between keeping things clear and catering to engineering concerns. I believe Scala has erred a bit too much towards catering to engineering concerns, at the expense of greatly complicating its model of types and type inference. Right now, I would hope the pendulum can swing a bit in the other direction, towards clear foundations supported by a faithful compiler.

Cheers

 - Martin

martin odersky

Jun 11, 2013, 11:25:27 AM
to scala-internals
Also, even if people use List[Any] only rarely, what about Any => T?  

Paul Phillips

Jun 11, 2013, 12:15:05 PM
to scala-i...@googlegroups.com
On Tue, Jun 11, 2013 at 11:25 AM, martin odersky <martin....@epfl.ch> wrote:
Also, even if people use List[Any] only rarely, what about Any => T?  

What about it? Nobody is suggesting removing Any from the language, only not inferring it. If you use Any => T, or List[Any], with such frequency that you would especially notice if it weren't inferred, then you (generic you) are doing it wrong.



Paul Phillips

Jun 11, 2013, 12:16:11 PM
to scala-i...@googlegroups.com
Or if your point was that it's in contravariant position, the good news is that prohibiting the inference of Any is JUST AS USEFUL here!

martin odersky

Jun 11, 2013, 12:36:29 PM
to scala-internals
You are right, it does not add anything new; it's the same question: should the compiler throw out certain inferred types? 

I'm still skeptical we can come up with an actual definition of "undesirable" types that does not do more harm than good.
And for the reasons stated I am against just picking out some types with ad-hoc rules.

 - Martin



 


--
You received this message because you are subscribed to the Google Groups "scala-internals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-interna...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 



--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967

Paul Phillips

Jun 11, 2013, 12:33:16 PM
to scala-i...@googlegroups.com

On Tue, Jun 11, 2013 at 11:23 AM, martin odersky <martin....@epfl.ch> wrote:
I have personally used List[Any] quite often as a type.

Can I ask why? Here are the use cases with which I am familiar that I would consider plausibly valid:

 - general purpose serializer/pickler code
 - laziness wrapper around List[String] (as its elements will only be printed, but we are delaying the .toString calls)

My list runs out around there, and one of them is questionable (there is little reason it should be a List[Any] over a List[T].) Are these applications so frequent, or are there so many others, that they cannot handle the burden of specifying List[Any] ?

I understand your primary argument is based on the desire for total generality, not that List[Any] is so important; but, I question the List[Any] argument in its entirety.

martin odersky

Jun 11, 2013, 12:44:17 PM
to scala-internals
On Tue, Jun 11, 2013 at 6:33 PM, Paul Phillips <pa...@improving.org> wrote:

On Tue, Jun 11, 2013 at 11:23 AM, martin odersky <martin....@epfl.ch> wrote:
I have personally used List[Any] quite often as a type.

Can I ask why? Here are the use cases with which I am familiar that I would consider plausibly valid:

 - general purpose serializer/pickler code
 - laziness wrapper around List[String] (as its elements will only be printed, but we are delaying the .toString calls)

Basically anywhere you need to deal with sequences of unityped data. Could be a Lisp interpreter, a pickler, or a communications protocol. Typically, you'd then regain the types at the end points using pattern matching.
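A minimal sketch of that style, with hypothetical wire data, where the end point regains the types by pattern matching:

```scala
object Unityped {
  // A sequence of unityped data, deliberately typed List[Any]:
  val wire: List[Any] = List(1, "two", 3.0)

  // The end point recovers the types via pattern matching:
  def describe(xs: List[Any]): List[String] = xs.map {
    case n: Int    => s"int $n"
    case s: String => s"string $s"
    case d: Double => s"double $d"
    case other     => s"unknown $other"
  }

  def main(args: Array[String]): Unit =
    println(describe(wire).mkString(", "))
}
```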

Cheers

 - Martin

Paul Phillips

Jun 11, 2013, 1:06:23 PM
to scala-i...@googlegroups.com
Just for fun I implemented this in the last few minutes. I don't know why we should struggle over these matters for the rest of our lives, just give those of us who want to run a tighter ship SOME way in.

% qscala -Dscala.guidance=scala.reflect.api.NoAnyGuidance
Welcome to Scala version 2.11.0-20130609-163556-16c31f4923 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_45).
Type in expressions to have them evaluated.
Type :help for more information.

scala> def f(x: Boolean) = if (x) "a" else 1
<console>:7: error: Inference guidance prohibits inference of Any
       def f(x: Boolean) = if (x) "a" else 1
                           ^

scala> def f(x: Boolean): Any = if (x) "a" else 1
f: (x: Boolean)Any


nafg

Jun 12, 2013, 5:48:43 PM
to scala-i...@googlegroups.com


On Tuesday, June 11, 2013 9:39:11 AM UTC-4, martin wrote:



On Mon, Jun 10, 2013 at 10:52 PM, Paul Phillips <paulp@improving.org> wrote:

On Mon, Jun 10, 2013 at 9:09 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
In the end, I don't think we will end up with tons of List[AnyVal]s or similar.

As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].


I sympathize with that point of view, but we don't have a choice in the matter. Inference infers what it infers under general rules. And sometimes AnyVal is the best type according to the rules. The corner-stone of these rules is that type inference should infer the best solution of a constraint system, according to some partial order. The partial order is weak subtyping, i.e. subtyping augmented by numeric widening (*). I see the attraction of simplifying the partial order and go to plain subtyping instead. But then we _will_ infer List[AnyVal] as the type of List(1, 2.0).

Now, one could say let's just add extra rules and tweaks to make cases that we think make no sense go away. These tweaks always look good at first. But in the long run they doom you, because your type checking becomes an unpredictable mess of interacting tweaks, each of which is harmless in isolation.

Maybe I missed something, but if the objective here is to prevent programmer error, why would that need to affect type inference rules? Let type inference stay as general as it is, but couldn't you still mark it as an error or warning when a certain type was inferred? I don't see that as clouding the actual inference rules.

I'm not taking any side in the actual question, just wondering.

Paul Phillips

Jun 12, 2013, 10:10:10 PM
to scala-i...@googlegroups.com

On Wed, Jun 12, 2013 at 5:48 PM, nafg <nafto...@gmail.com> wrote:
but couldn't you still mark it as an error or warning when a certain type was inferred?

In case there is any ambiguity, when I say "don't infer Any", that is what I mean.

Lex Spoon

Jun 20, 2013, 4:35:01 PM
to scala-i...@googlegroups.com
Here's a crisp rule that looks interesting:

Never infer a lub as a type or type parameter. Infer the max, or
emit a type error.

By "max" I mean the type in the list, if there is one, that is a weak
supertype of all the others.

The main intuition behind this rule is that code like List(1, 2.0),
with no expected type, is fundamentally fragile. Yes, that particular
list might be okay, but what if they write List(orange, apple,
alpaca)? If you believe in type checking as a routine part of software
engineering, then it should be disturbing that no matter what a user
puts in the list, the type checker will say A-OK.

Based on this rule, List(1, 2.0) would be fine, and would be a
List[Double]. List(orange, alpaca) would be an error; you'd have to
write List[Any](orange, alpaca). If this pattern generalizes, then the
rule causes the compiler to decline to infer useless types. Yet, it
does so without a special case for AnyVal or Serializable or anything
else.
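For comparison, current scalac already produces the first outcome via the weak lub, while the second quietly succeeds with Any today (a small sketch):

```scala
object MaxVsLub {
  // Double is the weak max of {Int, Double}: inferred List[Double] today,
  // and still well-typed under the proposed max rule.
  val ds = List(1, 2.0)

  // No max exists among {Int, String}: today this infers List[Any];
  // under the proposed rule it would be an error without List[Any](...).
  val xs = List(1, "a")

  def main(args: Array[String]): Unit = println((ds, xs))
}
```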

Well, it is an off the cuff idea. Maybe it breaks down on some killer
example. It looked interesting, though.

Lex

Simon Ochsenreither

Jun 20, 2013, 5:20:24 PM
to scala-i...@googlegroups.com

Based on this rule, List(1, 2.0) would be fine, and would be a
List[Double].

Wasn't the idea which started this thread to have less special-casing and less dangerous implicit conversions?

Rex Kerr

Jun 20, 2013, 6:27:03 PM
to scala-i...@googlegroups.com
if (p) Some(x) else None

Left(x) :: Right(y) :: Nil

sealed trait Foo
case class Bar(i: Int) extends Foo
case class Baz(s: String) extends Foo
def next(f: Foo) = f match {
  case Bar(i) => Bar(i+1)
  case Baz(s) => Baz(s+" ")
}

  --Rex
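(All of those do land on the sealed parent under today's lub; e.g. in this sketch the ascriptions compile as-is, though the inferred types may carry extra Product with Serializable refinements:)

```scala
object SealedLub {
  val p = true
  // lub of Some[Int] and None.type is the sealed parent Option[Int]:
  val o: Option[Int] = if (p) Some(1) else None
  // Left and Right likewise meet in the sealed parent Either[Int, String]:
  val es: List[Either[Int, String]] = Left(1) :: Right("a") :: Nil

  def main(args: Array[String]): Unit = println((o, es))
}
```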




Paul Phillips

Jun 20, 2013, 7:56:25 PM
to scala-i...@googlegroups.com
Of course we notice what those cases all have in common. Allowing for inferring a sealed common supertype makes all kinds of sense. 

Pavel Pavlov

Jun 21, 2013, 2:09:01 AM
to scala-i...@googlegroups.com
@WarnIfInferred trait Product
Will this help?

In other words, what if we leave inference algorithm as is, check its results and then issue warnings/errors guided by user-defined annotations?

As for numeric widening I believe that literals and variables should be treated differently.
The key question here: what is the type of numeric literal?
I think that 1.type is not (should not be) exactly Int, but rather 1.type <: Int with Long with Float with Double.
Then lub(1.type, 2.0.type) == Double and we'll get natural widening for numeric literals.
Such typing will result in:
List(1, 2.0): List[Double]
List(List(1), List(2.0)): List[List[Double]]

With this, all widening conversions for non-literals can be removed without regret.

What do you think?
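A rough type-class approximation of such literal polymorphism, in the spirit of Haskell's fromInteger; all names below are invented for illustration:

```scala
object LitPoly {
  // Hypothetical: an Int "literal" usable at any type with a FromIntLit instance.
  trait FromIntLit[A] { def fromInt(i: Int): A }
  implicit val intLit: FromIntLit[Int] =
    new FromIntLit[Int] { def fromInt(i: Int) = i }
  implicit val doubleLit: FromIntLit[Double] =
    new FromIntLit[Double] { def fromInt(i: Int) = i.toDouble }

  def lit[A](i: Int)(implicit ev: FromIntLit[A]): A = ev.fromInt(i)

  // The expected type selects the instance, so the "literal" widens
  // without any implicit conversion applying to ordinary variables:
  val xs: List[Double] = List(lit(1), lit(2), 2.5)
}
```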

Simon Ochsenreither

Jun 21, 2013, 7:03:03 AM
to scala-i...@googlegroups.com

Then lub(1.type, 2.0.type) == Double and we'll get natural widening for numeric literals.
Such typing will result in:
List(1, 2.0): List[Double]
List(List(1), List(2.0)): List[List[Double]]

The reason I want to get rid of the implicit widening conversions in the first place is that I want to get rid of exactly those dangerous "natural" widenings, which destroy your precision without any warning.

Lex Spoon

Jun 21, 2013, 8:48:41 AM
to scala-i...@googlegroups.com
Simon, yes, it is a different topic to the one you originally raised.
For some reason people started talking about inference of Any or
AnyVal, so I joined in.

The decision about int-to-float is orthogonal to inference of
Any/AnyVal. If the decision is to remove all automatic conversion of
int-to-float, then List(1, 2.0) would stop type checking, because
neither type is even a weak supertype of the other.

Rex, good ones! It does look helpful to infer the root of a sealed
hierarchy, especially for option. That's a weakness of using max
instead of lub.

Pavel, an annotation looks like a good engineering solution to the
problem. It does swing that pendulum Martin describes away from a
minimal type checker and toward engineering concerns, but at least
it's a clean and general solution.

Treating 1.type as ambiguous about which number type it is does look
plausible, but it also looks rather complicated to explain.

Lex

Matthew Pocock

Jun 21, 2013, 9:10:33 AM
to scala-i...@googlegroups.com
On 21 June 2013 13:48, Lex Spoon <l...@lexspoon.org> wrote:

Treating 1.type as ambiguous about which number type it is does look
plausible, but it also looks rather complicated to explain.

This is closer, in effect if not in type theory, to how Haskell treats numeric literals. I have the feeling that anything we do in this area will be confusing or annoying to some group of users. I'd prefer a solution where those annoyed by the default choice can at least recover the behaviour they'd like by including the appropriate import statements. In my scripty code, I want stuff to just work; scripts are debugged as you write and run them, and the speed of writing is paramount. In numerical code, I most definitely want List(1, 2.0), and any variant involving literals or variables, not to type-check. This code is a pain to get right, and bugs can remain hidden forever.

Matthew
 

Lex






--
Dr Matthew Pocock
Turing ate my hamster LTD

Integrative Bioinformatics Group, School of Computing Science, Newcastle University

skype: matthew.pocock
tel: (0191) 2566550

Simon Ochsenreither

Jun 21, 2013, 10:42:08 AM
to scala-i...@googlegroups.com

Simon, yes, it is a different topic to the one you originally raised.
For some reason people started talking about inference of Any or
AnyVal, so I joined in.

The decision about int-to-float is orthogonal to inference of
Any/AnyVal. If the decision is to remove all automatic conversion of
int-to-float, then List(1, 2.0) would stop type checking, because
neither type is even a weak supertype of the other.

List(1, 2.0) won't stop type-checking. It will pick the most precise type for that expression which the compiler can compute, which currently is List[AnyVal].
There is zero difference from List(someInstance, anotherInstance) where the two instances don't share any traits. The result type will be List[AnyRef].

But considering the fact that there aren't many useful things one can do with those top types, union types were brought up, which solve these issues in probably the best possible way: by enabling the compiler to compute the type more precisely than before, in a way which feels more natural to users.

Paul Phillips

Jun 22, 2013, 2:23:05 PM
to scala-i...@googlegroups.com

On Fri, Jun 21, 2013 at 7:42 AM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
The decision about int-to-float is orthogonal to inference of
Any/AnyVal. If the decision is to remove all automatic conversion of
int-to-float, then List(1, 2.0) would stop type checking, because
neither type is even a weak supertype of the other.

List(1, 2.0) won't stop type-checking. It will pick the most precise type for that expression which the compiler can compute, which currently is List[AnyVal].

The meaning of "would stop type checking" is in the context of "if we adopted the proposed rule", under which it would fail to type check rather than infer AnyVal.

Paolo G. Giarrusso

Jun 23, 2013, 5:56:59 AM
to scala-i...@googlegroups.com
So you'd even oppose making literals polymorphic. Hence right now there are three positions about widening:

1. status quo: unrestricted widening
2. allow widening only for literals
3. prevent widening.

Before this message, I thought that only 1 and 2 were represented.

About one technique for implementing position 2, Pavel Pavlov wrote:
> I think that 1.type is not (should not be) exactly Int, but rather 1.type <: Int with Long with Float with Double.

I think that's an interesting idea, but unfortunately I don't think it works well enough. One example of the Haskell-like complications you get is 

val x = 1

After type inference without a special case, I guess you get:

val x: Int with Long with Float with Double = 1

And that doesn't sound desirable. The equivalent to Haskell would be [T: Numeric] => T. However, that's not really equivalent: in Haskell, if you later use x in a context requiring an Int, then x will instead get type Int. Hence, what we'd have in Scala would be strictly less convenient to use.

If you instead still want to infer Int there, you'd need a special case in type inference, and I don't think there's an elegant way to specify it.
Having a special case that allows coercing literals (as we do right now) sounds less messy, and works (to some extent) in practice.

Paolo G. Giarrusso

Jun 23, 2013, 6:31:34 AM
to scala-i...@googlegroups.com


On Friday, June 21, 2013 2:48:41 PM UTC+2, lexspoon wrote:
Simon, yes, it is a different topic to the one you originally raised.
For some reason people started talking about inference of Any or
AnyVal, so I joined in.

The decision about int-to-float is orthogonal to inference of
Any/AnyVal. If the decision is to remove all automatic conversion of
int-to-float, then List(1, 2.0) would stop type checking, because
neither type is even a weak supertype of the other.

Rex, good ones! It does look helpful to infer the root of a sealed
hierarchy, especially for option. That's a weakness of using max
instead of lub.

Also, why restrict this just to a sealed hierarchy? I sure want to transform expressions of some DSL (and would like type inference to help there), but I never seal the datatype of expressions, since in my scenarios it should be extensible.
 
Pavel, an annotation looks like a good engineering solution to the
problem. It does swing that pendulum Martin describes away from a
minimal type checker and toward engineering concerns, but at least
it's a clean and general solution.
 
I agree. Moreover, I think it's even less problematic than you suggest. What makes things much simpler is that the results of the type inference algorithm itself would be unchanged. Most warnings are not formalized anyway; they are outside the "critical" part of the compiler.

Simon Ochsenreither

Oct 29, 2013, 3:52:21 PM
to scala-i...@googlegroups.com
Ok, reviving this to suggest some common ground.

I think we all agree that
  • those integral-to-floating-point conversions are harmful
  • but receiving AnyVal when combining different numeric types is not the way to go either

From my POV, union types will help tremendously here to give users a simple, descriptive, principal type, but union types are not there yet.

So what about adding a warning in the sense of -Ywarn-numeric-widen, but only for integral-to-floating-point conversions, and including it in -Xlint or -Xdev? This way, we warn people that something potentially dangerous is happening and decrease the amount of affected code if we ever decide to infer a better/different type for code which combines different number types in the future (union types or some other approach).

From a long-term evolution POV, this would also simplify the integration of larger numeric types like 128/256 bit numbers when/if they arrive.

Hanns Holger Rutz

Oct 29, 2013, 4:02:46 PM
to scala-i...@googlegroups.com

I think Int -> Float should still be accepted.

1. Floating point is by definition a limited-precision format
2. It would suck, specifically for domain-specific uses, if `foo(42)`
gives me a warning because `foo` has a float argument. Having to write
`foo(42f)` is visual noise and annoying.

I cannot think of a situation where Long -> Float would make sense.

A compromise re Int -> Float could be that only literals are accepted
without warning, and only if they can be represented without loss
(x.toFloat.toInt == x); loss occurs for x > 0x01000000 || x < -0x01000000
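That boundary is easy to verify, since Float carries a 24-bit significand (a minimal check):

```scala
object FloatBoundary {
  def main(args: Array[String]): Unit = {
    val a = 0x01000000            // 16777216 == 2^24
    assert(a.toFloat.toInt == a)  // still exactly representable
    val b = a + 1                 // 16777217
    assert(b.toFloat.toInt != b)  // first Int lost in the round-trip
    println("Int -> Float becomes lossy just above 2^24")
  }
}
```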

best, .h.h.





Paul Phillips

Oct 29, 2013, 4:29:07 PM
to scala-i...@googlegroups.com

On Oct 29, 2013, at 1:02 PM, Hanns Holger Rutz <con...@sciss.de> wrote:

> 1. Floating point is per definition a limited precision format
> 2. It would suck, specifically for domain specific uses, if `foo(42)`
> gives me a warning because `foo` has a float argument. Having to write
> `foo(42f)` is visual noise and annoying.

There is no set of defaults which will make everyone happy. All you really have to do is give people who care about correctness a way to opt into correctness. They will do it, because they care about correctness. This is their eternal burden. The least you can do is exploit it.

Hanns Holger Rutz

Oct 29, 2013, 4:33:30 PM