something has changed between 2.10 and now
--
You received this message because you are subscribed to the Google Groups "scala-internals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-interna...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
I'd love to see all widening shifted out from the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs to me and when it isn't, it's extra cognitive overhead, and I'd love to be able to work in code with all such conversions disabled.
Matthew
On Mon, Jun 10, 2013 at 12:29 PM, Matthew Pocock <turingate...@gmail.com> wrote:
> I'd love to see all widening shifted out from the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs to me and when it isn't, it's extra cognitive overhead, and I'd love to be able to work in code with all such conversions disabled.
>
> Matthew

I'd heartily agree, except for this:

scala-hypothetical> val x = List(1, 2.0)
scala-hypothetical> x: List[AnyVal]

Not very intuitive...
On 10 June 2013 14:46, martin odersky <martin....@epfl.ch> wrote:
On Mon, Jun 10, 2013 at 12:29 PM, Matthew Pocock <turingate...@gmail.com> wrote:
> I'd love to see all widening shifted out from the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs to me and when it isn't, it's extra cognitive overhead, and I'd love to be able to work in code with all such conversions disabled.
>
> Matthew

> I'd heartily agree, except for this:
>
> scala-hypothetical> val x = List(1, 2.0)
> scala-hypothetical> x: List[AnyVal]
>
> Not very intuitive...

Sure, but then choosing to mangle this into a List[Int] or List[Double] in this case is also not very intuitive (cue wars about what intuitive means and how to measure it). I'd argue that this kind of thing is better caught by lint/codecheck tooling. In what sane applications do you actually want a list of some generic type such as AnyVal or AnyRef? It's nearly always a mistake.
It's definitely List[Double], not List[Int]. Just like 1 * 2.0 gives a Double, not an Int. So, List[Double] is the only sane type. It's not that we did not have the alternative before. Scala did not always have numeric widening, so the type of List(1, 2.0) was List[AnyVal] up to around Scala 2.7 (forgot when exactly the change was introduced). But some users found the old behavior was a bug, and I can't blame them.
Generally, the fact that Scala type inference works surprisingly well in practice is the result of many, many small tweaks like this one. I don't like the tweaks but I like the fact that type inference works pretty well in most cases.
In the end, I don't think we will end up with tons of List[AnyVal]s or similar. (Imho) the main idea behind getting rid of implicit widening conversions is that we want people to be explicit about potentially lossy conversions, both to increase the clarity of the code and reduce the potential of bugs.
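Martin's description of how widening feeds type inference can be checked directly; a small sketch in plain Scala (the `explicit` value shows what code might look like if the conversion had to be spelled out, as the proposal suggests):

```scala
// With numeric widening in inference, mixed literals unify to Double:
val widened = List(1, 2.0)            // inferred List[Double], not List[AnyVal]
assert(widened == List(1.0, 2.0))

// Being explicit about the (here lossless) conversion would read like this:
val explicit = List(1.toDouble, 2.0)
assert(explicit == widened)

// Mixed arithmetic widens the same way: Int * Double is Double.
val product: Double = 1 * 2.0
assert(product == 2.0)
```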
> It's definitely List[Double], not List[Int]. Just like 1 * 2.0 gives a Double, not an Int. So, List[Double] is the only sane type. It's not that we did not have the alternative before. Scala did not always have numeric widening, so the type of List(1, 2.0) was List[AnyVal] up to around Scala 2.7 (forgot when exactly the change was introduced). But some users found the old behavior was a bug, and I can't blame them.
I don't think that the current behavior is less buggy:
scala> List(123456789, 0f)
res23: List[Float] = List(1.23456792E8, 0.0)
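The loss is easy to reproduce; a quick sketch (the constants are chosen to exceed Float's 24-bit significand, so they cannot round-trip):

```scala
// Int -> Float widening is lossy: Float keeps only 24 bits of mantissa.
val n = 123456789
val f: Float = n                  // widens silently, no warning by default
assert(f.toInt != n)              // the round trip does not restore n
assert(f.toInt == 123456792)      // nearest representable Float, as shown above

// Long -> Float is lossier still (64 bits squeezed into 24):
val big = 123456789012345678L
assert(big.toFloat.toLong != big)
```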
> Generally, the fact that Scala type inference works surprisingly well in practice is the result of many, many small tweaks like this one. I don't like the tweaks but I like the fact that type inference works pretty well in most cases.
Agree. In the end, it's about exchanging not-so-nice corner-cases with other not-so-nice corner-cases. But if we can get rid of implicit widening conversion, I don't think it is a zero-sum game anymore.
If people mix different number types the compiler should give them a warning if AnyVal is inferred (we have plenty of diagnostics for Nothing, Unit and Any already).
Not running into weird ambiguities in method overloading resolution would be a nice benefit, too.
I think source compatibility is something we should treat seriously. As Martin explained, both options were tried, and we decided a long time ago to stick with lossy conversions. In particular, we didn't end up in the current situation by accident. I believe it's beating a dead horse, now.
Not sure I understand. Are you saying you would also throw out expressions like 123456789 * 1.0? I don't believe you can do that without breaking all sorts of numeric code written in Scala.
For better or worse, Java has an implicit widening from Int (and even Long!) to Float. I believe we have much better things to do than to go back on this one and discuss whether or not it's the right thing to do. I don't really care about the widening, but I do care about needlessly breaking huge amounts of code. So, IMO, the only question is whether the widening is applied to type parameter inference or is purely modeled by overloaded methods and implicit conversions, as we did in the early days of Scala. Not applying it to type inference would simplify things in the spec and compiler quite a bit. But it's not so simple to go back, as I have already outlined.
Anyway, I don't see how it will break code. Uses (which are probably very limited) of lossy conversions might generate a deprecation warning, just like those hundreds of other deprecations I have seen since 2.7.
What about -Ywarn-numeric-widen? Would it help more to have it turned into a supported option (without a Y prefix)? Or do you require also having -Yno-numeric-widen (with/without the Y prefix)? Because probably it's easier to lobby for that, or get a pull request accepted (at least with the -Y).
import language.doCrazyStuffWithNumbers
My guess is that the people annoyed by this behavior are mostly advanced users (or at least, non-beginners), so having them do the extra work of adding an option might be a reasonable compromise. I don't know if there's a beginner-friendly policy for Scala, but I think its less-steep-than-Haskell learning curve is a reason for its success (though I don't know which language is easier for advanced users, if we ignore the huge extensibility advantage of Scala).
> scala-hypothetical> val x = List(1, 2.0)
> scala-hypothetical> x: List[AnyVal]
>
> Not very intuitive...
Moving -Ywarn-numeric-widen to -X is fine, though, for those who write more correct code when they have to hit the d key more often.
--
Maybe the caricature is too unkind. If so, sorry; I'm letting extraneous frustrations leak through to here. And I do see the problems with automatic Long->Float especially. Augh. Who ever thought that was a good idea?
--but it's now so well established that it probably needs to be supported approximately forever.
I didn't really mean it as a put-down, though, because in practice this is exactly what it boils down to: run through your code hitting d in all the right places.
Which might be reasonable for advanced users, but it does keep getting in the way, and it's indeed hard for beginners (so much so that somebody maintained, for years, a Haskell variant for beginners without this feature, called Helium).
On the other other hand, there's this... even less intuitive given that it worked at level one.

scala> List(List(1), List(2.0))
res0: List[List[AnyVal]] = List(List(1), List(2.0))
As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].
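A sketch of the failure mode described here; the mixed list is a deliberate mistake that the compiler accepts silently by inferring AnyVal:

```scala
// Accidentally mixing element types infers List[AnyVal] with no complaint:
val xs = List(1, 2, true)        // oops: List[AnyVal], almost certainly a bug
assert(xs.length == 3)

// The slip only surfaces later: xs.map(_ + 1) would not compile,
// because AnyVal has no `+`. Annotating the intended type up front
// turns the mistake into an immediate compile error instead:
val ys: List[Int] = List(1, 2, 3)
assert(ys.sum == 6)
```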
On Mon, Jun 10, 2013 at 10:52 PM, Paul Phillips <paulp@improving.org> wrote:
> As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].
I sympathize with that point of view, but we don't have a choice in the matter. Inference infers what it infers under general rules. And sometimes AnyVal is the best type according to the rules. The corner-stone of these rules is that type inference should infer the best solution of a constraint system, according to some partial order. The partial order is weak subtyping, i.e. subtyping augmented by numeric widening (*). I see the attraction of simplifying the partial order and go to plain subtyping instead. But then we _will_ infer List[AnyVal] as the type of List(1, 2.0).
Now, one could say let's just add extra rules and tweaks to make cases that we think make no sense go away. These tweaks always look good first. But in the long run they doom you, because your type checking becomes an unpredictable mess of interacting tweaks, each of which is harmless in isolation.
-- Francois ARMAND http://fanf42.blogspot.com http://www.normation.com
I have to agree. There's not been a single case where an inference of the upper types is the intended type in my code. It is always a code smell and usually a mistake. Very occasionally I need to explicitly work with one of these as a return value but usually this is a big red flag that I have the wrong type parameters or am missing a type class.
Sent from my android - may contain predictive text entertainment.
Wow, ok. I expected it to work a bit better than papering over it in such a skin-deep manner.
The problem I see with this (outside of the original problem of List(1, 2d)) is that such a rule will reveal the next level of "useless" types, e.g:List(Person("Bob"), Widget("wizzle")) // List[Serializable with Product]
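The same effect is easy to reproduce with any two unrelated case classes (Person and Widget here are just the illustrative names from the example above):

```scala
// Two unrelated case classes, used only for illustration:
case class Person(name: String)
case class Widget(label: String)

// Their least upper bound drags in the Product and Serializable
// traits that all case classes share -- rarely a useful element type:
val xs = List(Person("Bob"), Widget("wizzle"))
val lub: List[Product with Serializable] = xs   // compiles: that is the inferred type
assert(lub.length == 2)
```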
On Tue, Jun 11, 2013 at 9:54 AM, Paul Phillips <pa...@improving.org> wrote:
On Tue, Jun 11, 2013 at 9:39 AM, martin odersky <martin....@epfl.ch> wrote:
> but we don't have a choice in the matter

One of us doesn't have a choice in the matter, and the other doesn't want a choice in the matter, but neither of us is required by some immutable law not to have a choice in the matter.
At its simplest, all we have to do is not infer types beyond a certain level of generality, on the basis that inferring such types enables far more errors than it prevents. Not inferring Any or AnyVal does not require an assortment of ad hoc rules. It only requires not inferring those types.
On Tue, Jun 11, 2013 at 4:54 PM, Jason Zaugg <jza...@gmail.com> wrote:
On Tue, Jun 11, 2013 at 9:54 AM, Paul Phillips <pa...@improving.org> wrote:
On Tue, Jun 11, 2013 at 9:39 AM, martin odersky <martin....@epfl.ch> wrote:
> but we don't have a choice in the matter
>
> One of us doesn't have a choice in the matter, and the other doesn't want a choice in the matter, but neither of us is required by some immutable law not to have a choice in the matter.

> At its simplest, all we have to do is not infer types beyond a certain level of generality, on the basis that inferring such types enables far more errors than it prevents. Not inferring Any or AnyVal does not require an assortment of ad hoc rules. It only requires not inferring those types.

I have personally used List[Any] quite often as a type. It's simply not feasible to "know" that these types are useless. Well, I agree that List[AnyVal] is pretty useless, but List[Any] is definitely not. And, who knows, maybe the user is happy to have a List[Any] inferred, and would not mind getting a List[AnyVal], because they erase to the same type. So one might prefer List[AnyVal] to an annoying warning or error.
Also, even if people use List[Any] only rarely, what about Any => T?
On Tue, Jun 11, 2013 at 11:23 AM, martin odersky <martin....@epfl.ch> wrote:

> I personally used quite often List[Any] as a type.

Can I ask why? Here are the use cases I am familiar with that I would consider plausibly valid:

- general-purpose serializer/pickler code
- laziness wrapper around List[String] (as its elements will only be printed, but we are delaying the .toString calls)
On Mon, Jun 10, 2013 at 10:52 PM, Paul Phillips <paulp@improving.org> wrote:
> As I've mentioned a time or hundred, we should NEVER infer AnyVal, regardless of whether anything comes out of this. It's always an error; if someone thinks otherwise, I question their command of errordom. If you really want a list of AnyVals (and a more useless list is hard to imagine) then the least you can do is ask for a List[AnyVal].
>
> I sympathize with that point of view, but we don't have a choice in the matter. Inference infers what it infers under general rules. And sometimes AnyVal is the best type according to the rules. The corner-stone of these rules is that type inference should infer the best solution of a constraint system, according to some partial order. The partial order is weak subtyping, i.e. subtyping augmented by numeric widening (*). I see the attraction of simplifying the partial order and going to plain subtyping instead. But then we _will_ infer List[AnyVal] as the type of List(1, 2.0).
>
> Now, one could say let's just add extra rules and tweaks to make cases that we think make no sense go away. These tweaks always look good at first. But in the long run they doom you, because your type checking becomes an unpredictable mess of interacting tweaks, each of which is harmless in isolation.

But couldn't you still mark it as an error or warning when a certain type was inferred?
Based on this rule, List(1, 2.0) would be fine, and would be a List[Double]. Then lub(1.type, 2.0.type) == Double and we'll get natural widening for numeric literals.

Such typing will result in:

List(1, 2.0): List[Double]
List(List(1), List(2.0)): List[List[Double]]

Treating 1.type as ambiguous about which number type it is does look plausible, but it also looks rather complicated to explain.
Lex
Simon, yes, it is a different topic to the one you originally raised.
For some reason people started talking about inference of Any or
AnyVal, so I joined in.
The decision about int-to-float is orthogonal to inference of
Any/AnyVal. If the decision is to remove all automatic conversion of
int-to-float, then List(1, 2.0) would stop type checking, because
neither type is even a weak supertype of the other.
> The decision about int-to-float is orthogonal to inference of
> Any/AnyVal. If the decision is to remove all automatic conversion of
> int-to-float, then List(1, 2.0) would stop type checking, because
> neither type is even a weak supertype of the other.
List(1, 2.0) won't stop type-checking. It will pick the most precise type for that expression which the compiler can compute, which currently is List[AnyVal].
Rex, good ones! It does look helpful to infer the root of a sealed
hierarchy, especially for option. That's a weakness of using max
instead of lub.
Pavel, an annotation looks like a good engineering solution to the
problem. It does swing that pendulum Martin describes away from a
minimal type checker and toward engineering concerns, but at least
it's a clean and general solution.
2. It would suck, specifically for domain-specific uses, if `foo(42)`
gives me a warning because `foo` has a Float argument. Having to write
`foo(42f)` is visual noise and annoying.
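Concretely, the ergonomic cost looks like this (`foo` is a hypothetical method standing in for any Float-taking API):

```scala
// Hypothetical API with a Float parameter:
def foo(x: Float): Float = x * 2

// Today the Int literal widens silently (and here losslessly):
assert(foo(42) == 84.0f)

// Under stricter rules every call site would need the suffix:
assert(foo(42f) == 84.0f)
```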
Ok, reviving this to suggest some common ground.
I think we all agree that
- those integral-to-floating-point conversions are harmful
- but receiving AnyVal when combining different numeric types is not the way to go either
From my POV, union types will help tremendously here to give users a simple, descriptive, principal type, but union types are not there yet.
So what about adding a warning in the sense of Ywarn-numeric-widen, but only for integral-to-floating-point conversions, and including it in Xlint or Xdev? This way, we warn people that something potentially dangerous is happening and decrease the amount of affected code if we ever decide to infer a better/different type for code which combines different number types in the future (union types or some other approach).
From a long-term evolution POV, this would also simplify the integration of larger numeric types like 128/256 bit numbers when/if they arrive.
--
Martin Odersky
Chairman, Typesafe
The company for Reactive Apps on the JVM
> - but receiving AnyVal when combining different numeric types is not the way to go either
>
> From my POV, union types will help tremendously here to give users a simple, descriptive, principal type, but union types are not there yet.

And they have their own problems. E.g. if you are not careful, the inferred type of

  if (???) Nil else 1 :: Nil

would be Nil | Cons[Int]. I don't think people would be happy to see this type all over the place where they got List[Int] until now. And it gets proportionally worse if you generalize this to ADTs with many branches. So it's quite likely that type inference will have to ignore union types, at least to some degree.
On Tuesday, October 29, 2013 2:13:28 PM UTC-7, martin wrote:
> - but receiving AnyVal when combining different numeric types is not the way to go either
>
> From my POV, union types will help tremendously here to give users a simple, descriptive, principal type, but union types are not there yet.
>
> And they have their own problems. E.g. if you are not careful, the inferred type of if (???) Nil else 1 :: Nil would be Nil | Cons[Int]. I don't think people would be happy to see this type all over the place where they got List[Int] until now. And it gets proportionally worse if you generalize this to ADTs with many branches. So it's quite likely that type inference will have to ignore union types, at least to some degree.
List[Int] is a synonym of Nil | Cons[Int] in a union type universe.
The human readable printing of inferred union types should use the simplest synonym in scope. With that, the above example is not a problem and users would see List[Int].
With extremely branchy ADTs it may be worse, but even a complicated union is better than AnyRef. I assume that best practices would evolve, and that the best practices for these cases would include providing synonyms for frequently encountered union combinations in an ADT space.
> - those integral-to-floating-point conversions are harmful
I don't agree. One part of Scala's design philosophy is to leave some things alone. I am not super fond of Java's way of handling numeric expressions, but when I set out to do Scala it was a conscious design decision not to try to change this. Why? Because Scala is about other things, such as clean and simple integration of OOP and FP. Some areas that are orthogonal to the main goals were adopted wholesale from Java. And that, IMO, was a good decision. You have to choose your battles.
Sorry, I did not get the -Xlint part on first reading, so misunderstood the gist of your mail in my answer.
I agree that under -Xlint it would make sense to have such a warning. Is there a way to configure -Xlint to control what kind of warnings should be emitted?
To clarify the intended use of these things, this would be -Xlint. The warnings issued under -Xdev are specifically intended for people monitoring the health of the compiler - they should be messages like “internal data structure Bippy has unexpected member Bloopy, that’s not good but I will struggle onward.”
> - those integral-to-floating-point conversions are harmful
>
> I don't agree. One part of Scala's design philosophy is to leave some things alone. I am not super fond of Java's way of handling numeric expressions but when I set out to do Scala it was a conscious design philosophy to not try to change this. Why? Because Scala is about other things, such as clean and simple integration of OOP and FP. Some areas that are orthogonal to the main goals were adopted wholesale from Java. And that, IMO, was a good decision. You have to choose your battles.
I'm all for not changing stuff for the sake of change, but even Java's designers say that it's a terrible mistake.
Scala has fixed tons of more minor annoyances we could have inherited from Java, so I don't think we should treat this one as exempt from criticism.

> Sorry, I did not get the -Xlint part on first reading, so misunderstood the gist of your mail in my answer.
No problem, as mentioned I'm currently just interested in getting a decent warning.
> I agree that under -Xlint it would make sense to have such a warning. Is there a way to configure -Xlint to control what kind of warnings should be emitted?
I think there isn't one (yet), but I think it would go against the intention of Xlint anyway. As far as I understand it, Xlint represents a bag of "common sense" warnings recommended for general use. Splitting that up into multiple options doesn't make much sense imho.
> I'm all for not changing stuff for the sake of change, but even Java's designers say that it's a terrible mistake.

Do you have a reference for that? I was not aware of that before.
> "It would be totally delightful to go through [Java] Puzzlers,
> another book that I wrote with Neal Gafter, which contains all
> the traps and pitfalls in the language and just excise them -
> one by one. Simply remove them.
>
> There are things that were just mistakes, so for example ...
> [misspeaks] ... int to float, is a primitive widening conversion
> and happens silently, but is lossy if you go from int to float
> and back to int.
> You often won't get the same int that you started with.
>
> Because, you know, floats, some of the bits are used for the
> exponent rather than the mantissa, so you lose precision.
> When you go to float and back to int you'll find that you didn't
> have the int you started with.
>
> So, you know, it was a mistake, it should be corrected, it would
> break existing programs. So I do like the idea of essentially
> writing a new language which is very similar to Java which
> sort of fixes all these bad things. And if someone's to call it
> 'Java', that would be great, too. Just so long as traditional
> java source code can still be compiled and run against the
> latest VMs. [...]
>
> -- Joshua Bloch
On Mon, Jun 10, 2013 at 12:29 PM, Matthew Pocock <turingate...@gmail.com> wrote:
> I'd love to see all widening shifted out from the language and into implicits that can be excluded from scope if necessary. It's a constant source of bugs to me and when it isn't, it's extra cognitive overhead, and I'd love to be able to work in code with all such conversions disabled.
>
> Matthew
>
> I'd heartily agree, except for this:
>
> scala-hypothetical> val x = List(1, 2.0)
> scala-hypothetical> x: List[AnyVal]
That's interesting. I was not aware of this. What about C#? I assume they repeated Java's rules?
csharp> var ls = new List<float> { 1, 2.0f };
csharp> ls;
{ 1, 2 }
csharp> ls.Add(123456789);
csharp> ls;
{ 1, 2, 1.234568E+08 }
On Wed, Oct 30, 2013 at 4:34 AM, Scott Carey <scott...@gmail.com> wrote:
On Tuesday, October 29, 2013 2:13:28 PM UTC-7, martin wrote:
> - but receiving AnyVal when combining different numeric types is not the way to go either
>
> From my POV, union types will help tremendously here to give users a simple, descriptive, principal type, but union types are not there yet.
>
> And they have their own problems. E.g. if you are not careful, the inferred type of if (???) Nil else 1 :: Nil would be Nil | Cons[Int]. I don't think people would be happy to see this type all over the place where they got List[Int] until now. And it gets proportionally worse if you generalize this to ADTs with many branches. So it's quite likely that type inference will have to ignore union types, at least to some degree.
> List[Int] is a synonym of Nil | Cons[Int] in a union type universe

You could argue that, but only because List happens to be sealed. Even then, it does not fall out automatically; you need an additional rule that takes sealedness into account. Put in other words, I do not see how List[Int] <: Nil | Cons[Int] would be derivable using the standard rules for union types. It's an interesting idea, though, which could help keep types smaller.
val b = if (blah) Nil else 1 :: Nil
b: List[Int] >: Cons[Int] | Nil = List()
> The human readable printing of inferred union types should use the simplest synonym in scope. With that, the above example is not a problem and users would see List[Int].
>
> With extremely branchy ADTs it may be worse, but even a complicated union is better than AnyRef. I assume that best practices would evolve, and that the best practices for these cases would include providing synonyms for frequently encountered union combinations in an ADT space.
But you would not usually get AnyRef. To pick another example: abstract syntax trees. Class Tree is not sealed and has ~ 40 cases. I don't think you want to see a union of 40 cases instead of the simple word "Tree".
And what happens here:
def foo(seq: Seq[Double]) = ()
foo(List(1, 1.0))
would that still work?
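It does work today, because the expected type propagates inward; a minimal check:

```scala
// The expected type Seq[Double] flows into the List(...) call, so the
// Int literal 1 is typed (widened) as a Double there:
def foo(seq: Seq[Double]): Double = seq.sum
assert(foo(List(1, 1.0)) == 2.0)

// Even without the expected type, inference sees the mix and widens:
val standalone = List(1, 1.0)     // List[Double]
assert(standalone == List(1.0, 1.0))
```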
scala> List[AnyVal](1,1.0)
res3: List[AnyVal] = List(1, 1.0)
scala> List[Double](1,1.0)
res4: List[Double] = List(1.0, 1.0)
Opinions, please!
scala> val a = 1
a: Int = 1
scala> def foo(d: Double) = d
foo: (d: Double)Double
scala> foo(1)
res0: Double = 1.0
scala> foo(a)
<console>:10: error: type mismatch;
found : Int
required: Double
foo(a)
^
scala> val bar: Double = 3
bar: Double = 3.0
scala> val bar: Double = a
<console>:8: error: type mismatch;
found : Int
required: Double
val bar: Double = a
^
scala> List(1,1.0)
<console>:8: error: type mismatch;
found : Double(1.0)
required: Int
List(1,1.0)
^
scala> List[AnyVal](1,1.0)
res3: List[AnyVal] = List(1, 1.0)
scala> List[Double](1,1.0)
res4: List[Double] = List(1.0, 1.0)
Side point perhaps, but that error message seems a bit confusing. What makes Int be required? I think it would be clearer if it said something along the lines of: searching for a common supertype failed because AnyVal was rejected because XXX. Or, more succinctly: could not find a common supertype other than AnyVal.
I brought the constant folding conversions in line with the new rules, so this is now the current status:
scala> val a = 1
a: Int = 1

scala> def foo(d: Double) = d
foo: (d: Double)Double

scala> foo(1)
<console>:9: error: type mismatch;
 found   : Int(1)
 required: Double
              foo(1)
                  ^

scala> List(1,1.0)
<console>:8: error: type mismatch;
 found   : Double(1.0)
 required: Int
              List(1,1.0)
                     ^
I think except for the failure to infer AnyVal in List(1,1.0), I'm pretty happy with it.
That is the Double constant 1, or should be. No AnyVal in sight. No widening, just context.
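That context-driven reading of the literal is what stock Scala already does for annotated definitions; a small sketch (only the first line relies on literal-in-context typing, the rest would work under any of the rule sets discussed):

```scala
// A literal in a Double context is simply the Double constant 1.0:
val d: Double = 1
assert(d == 1.0)

// A non-literal Int is another matter; the explicit conversion below
// compiles today and would keep compiling under the stricter rules:
val a = 1
val d2: Double = a.toDouble
assert(d2 == 1.0)
```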