end the blight of contrarivariance


Paul Phillips
May 27, 2012, 7:13:07 PM
to scala-l...@googlegroups.com, Kris Nuttycombe, Jeff Olson, Miles Sabin, Jason Zaugg
[CC list are probably all on scala-language, but included for interest.]

I don't know how to build a more open-and-shut case than this. (And I won't - whatever remains to be done is unlikely to be done by me.) Given this test case, also attached:


In trunk it prints:

If there are several eligible arguments which match the
implicit parameter’s type, a most specific one will be
chosen using the rules of static overloading resolution.
  -- SLS 7.2, "Implicit Parameters"

Static overloading selection: 1  2  3
    Implicit value selection: 1  1  1

In the branch indicated above (paulp, topic/contrarivariance)  it prints the same quote from the specification, followed by

Static overloading selection: 1  2  3
    Implicit value selection: 1  2  3

So as I see it, scala is not implementing its own specification and we should have fixed this bug years ago.  It goes back to at least October 2009, closed wontfix: https://issues.scala-lang.org/browse/SI-2509

As noted in my commit comment, once selection is done properly it's easy to make Ordering and friends contravariant.  Everything works the way you'd imagine.  It's so nice seeing the Iterable[T] implicit Ordering used for any subclass of Iterable without involving the sketchy hijinx seen in Ordering#ExtraImplicits.  But I didn't bundle any of those changes.

NOTE: the particulars of how I modified "isAsSpecific" are not anything I'm advocating for, I'm sure it's all totally wrong and etc.  My offering is:

 a) evidence that the specification already mandates that specificity is based on inheritance, not on the subtype lattice
 b) a complete working implementation, however flawed - all tests pass

contravariant-selection.scala
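The attached file is not preserved in this archive. A minimal sketch in its spirit, reconstructed from the output quoted above — the Ord trait, the f overloads, and the value names are illustrative guesses, not Paul's actual code:

```scala
// Ord stands in for a contravariant Ordering, as in the discussion.
trait Ord[-T]

object ContravariantSelection {
  // Static overloading: three overloads distinguished only by parameter type.
  def f(x: Ord[Iterable[Int]]) = 1
  def f(x: Ord[Seq[Int]])      = 2
  def f(x: Ord[List[Int]])     = 3

  implicit val ordIterable: Ord[Iterable[Int]] = new Ord[Iterable[Int]] { override def toString = "1" }
  implicit val ordSeq:      Ord[Seq[Int]]      = new Ord[Seq[Int]]      { override def toString = "2" }
  implicit val ordList:     Ord[List[Int]]     = new Ord[List[Int]]     { override def toString = "3" }

  def main(args: Array[String]): Unit = {
    // Overloading resolves by each argument's static type.
    // Prints: Static overloading selection: 1  2  3
    println(s"Static overloading selection: ${f(ordIterable)}  ${f(ordSeq)}  ${f(ordList)}")

    // Implicit search for the same three types. The thread reports "1  1  1"
    // here on 2012-era scalac: the subtype-most candidate (ordIterable) is
    // deemed most specific for all three queries. Later compilers changed
    // this behavior, so the line printed may differ by Scala version.
    println(s"    Implicit value selection: ${implicitly[Ord[Iterable[Int]]]}  ${implicitly[Ord[Seq[Int]]]}  ${implicitly[Ord[List[Int]]]}")
  }
}
```

Only the overloading line is version-stable; the implicit line is exactly the behavior being disputed in this thread.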

martin odersky
May 28, 2012, 5:33:25 AM
to scala-l...@googlegroups.com
On Mon, May 28, 2012 at 1:13 AM, Paul Phillips <pa...@improving.org> wrote:
> [CC list are probably all on scala-language, but included for interest.]
>
> I don't know how to build a more open-and-shut case than this. (And I won't - whatever remains to be done is unlikely to be done by me.) Given this test case, also attached:
>
> In trunk it prints:
>
> If there are several eligible arguments which match the
> implicit parameter’s type, a most specific one will be
> chosen using the rules of static overloading resolution.
>   -- SLS 7.2, "Implicit Parameters"
>
> Static overloading selection: 1  2  3
>     Implicit value selection: 1  1  1
>
> In the branch indicated above (paulp, topic/contrarivariance) it prints the same quote from the specification, followed by
>
> Static overloading selection: 1  2  3
>     Implicit value selection: 1  2  3
>
> So as I see it, scala is not implementing its own specification and we should have fixed this bug years ago.  It goes back to at least October 2009, closed wontfix: https://issues.scala-lang.org/browse/SI-2509

No, there is a crucial difference: The f function 
is _applied_ to arguments of the three different types, yet the 
implicit values stand alone. We all know that variance reverses in argument position and that's the effect you are seeing. So your example demonstrates in effect that the spec and compiler are in agreement. I have an example showing the right correspondence with overloading at the end of this mail.

We have talked about this many times before. To make progress here you'd have to invent a completely new notion of specificity alongside the subtyping relation we have. Specificity (i.e. minimize all type parameters regardless of variance in the style of Eiffel) just feels wrong to me from a type-systematic point of view. It's also a big change, and it's a change that would make the language considerably more complicated. And then I am not even sure we won't run into a new set of borderline cases where specificity is not the right criterion and we want subtyping instead.

It would be cleaner to have variance annotations in implicit parameters. E.g. something like:

  def [T] foo(-implicit x: Ord[T])

to indicate that we want to maximize the Ord implicit (because all we do is apply it to an argument later), instead of minimizing it. It would fix the problem in a clean way. Unfortunately it's also more complexity in the user's face. That's why the ticket was closed wontfix.

Cheers

 - Martin


Here's an example that shows overloading resolution of the Ord values in the sense of the spec. I had to equip the different Ords with implicits themselves because otherwise we'd have run into a duplicate method error. As expected, it prints 1, i.e. the Ord[Iterable] is chosen over the others. So overloading and implicit resolution are in agreement.


trait A
trait B
trait C
trait Ord[-T]

object Test extends App {

  implicit val a = new A {}
  implicit val b = new B {}
  implicit val c = new C {}

  def Ord(implicit x: A): Ord[Iterable[Int]] = new Ord[Iterable[Int]] { override def toString = "1" }
  def Ord(implicit x: B): Ord[     Seq[Int]] = new Ord[     Seq[Int]] { override def toString = "2" }
  def Ord(implicit x: C): Ord[    List[Int]] = new Ord[    List[Int]] { override def toString = "3" }

  println(Ord)

}


 
> As noted in my commit comment, once selection is done properly it's easy to make Ordering and friends contravariant.  Everything works the way you'd imagine.  It's so nice seeing the Iterable[T] implicit Ordering used for any subclass of Iterable without involving the sketchy hijinx seen in Ordering#ExtraImplicits.  But I didn't bundle any of those changes.
>
> NOTE: the particulars of how I modified "isAsSpecific" are not anything I'm advocating for, I'm sure it's all totally wrong and etc.  My offering is:
>
>  a) evidence that the specification already mandates that specificity is based on inheritance, not on the subtype lattice
>  b) a complete working implementation, however flawed - all tests pass




--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967

Paul Phillips
May 28, 2012, 12:59:21 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 2:33 AM, martin odersky <martin....@epfl.ch> wrote:
> To make progress here you'd have to invent a completely new notion of specificity alongside the subtyping relation we have.

For posterity (I'm not arguing - I give up) this is the completely new notion of specificity which I have developed.

  Object A: It can sort vectors of acme brand frabazulators which have a disney emblem on the left front quadrant, and where either tom cruise or john travolta is hiding inside one of the frabazulators.  It cannot sort anything else.
  Object B: It can sort anything.

WHICH OBJECT IS MORE SPECIFIC?

  Scala: object B
  New notion of specificity: object A

The reference to Eiffel makes me wonder if you are really taking the trouble to understand what it is people want here.  Eiffel is unsound, this isn't.

martin odersky
May 28, 2012, 2:06:18 PM
to scala-l...@googlegroups.com
I never claimed it is unsound. You can use any relation you like for overloading resolution and implicit search without violating soundness. And maybe I am wrong in my assumption what relationship was proposed, because you did not define it. So I can only guess.

 - Martin

Paul Phillips
May 28, 2012, 2:14:49 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 11:06 AM, martin odersky <martin....@epfl.ch> wrote:
> I never claimed it is unsound. You can use any relation you like for overloading resolution and implicit search without violating soundness. And maybe I am wrong in my assumption what relationship was proposed, because you did not define it. So I can only guess.

Anyone in the peanut gallery have their own guess as to what relationship is proposed? I am curious whether everything is really so opaque that only guesses can be hazarded.  I understand your response to mean "you did not phrase it in spec-ese" and since I know empirically that I cannot write spec-ese in a way which will satisfy you, it is a simple way for you to shut down the subject by fiat.  That's your prerogative, but the well does eventually run dry.

martin odersky
May 28, 2012, 2:34:02 PM
to scala-l...@googlegroups.com
You could try. I do not demand "spec-ese", but I think it's fair to demand a definition that covers all cases instead of one specific example. 

 - Martin



Daniel Sobral
May 28, 2012, 4:13:44 PM
to scala-l...@googlegroups.com
I find this issue confusing. Let's assume Ord[-T] and Seq[+T], A >: B >: C.
If I define f for Ord[A], Ord[B] and Ord[C] as well as g for Seq[A],
Seq[B] and Seq[C], calling f(x: Ord[B]) and g(x: Seq[B]) will both
return the B-variant. However, in the *absence* of f for Ord[B] and
g for Seq[B], they'll return Ord[C] and Seq[A], respectively.

That's quite different from implicitly[Ord[B]] and implicitly[Seq[B]]
-- they act in mirrored ways, the first returning Ord[A] and the
second Seq[C], even in the presence of an implicit directly matching
Ord[B].

If implicit resolution went the way of overload resolution, that would
change the current behavior of co-variant implicit resolution. I'm not
sure if that's being proposed or not, and I do fear it might break
code.

I'm not sure how I'd spec implicit resolution to get the effect
demonstrated. First select available implicits respecting variance,
and then... treat contra-variance as if it were co-variance? That's
confusing. I'd rather say that implicit resolution of contra-variant
types follow the rules of overload resolution, but *not* for
co-variant types. That looks simpler to me, but I find the asymmetry
disquieting.

That's the core of my own doubts about this issue: do I want
asymmetric behavior on implicit resolution? Or, perhaps, using
overload resolution behavior all the way is the way to go (and deal
with the incompatibilities)? Or stay this way?

The problem with the latter option is that it introduces asymmetry
elsewhere: co-variant implicit resolution is useful, contra-variant
implicit resolution is not.

--
Daniel C. Sobral

I travel to the future all the time.

Paul Phillips
May 28, 2012, 4:31:32 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 1:13 PM, Daniel Sobral <dcso...@gmail.com> wrote:
> That's the core of my own doubts about this issue: do I want
> asymmetric behavior on implicit resolution?

Type inference already has plenty of asymmetry.  Ask yourself this: why does the invariant case act like the covariant case in the excerpt below? Why should invariance pick a side? Why that side? What is being admitted through the preference?

You can only subclass in one direction.  Perhaps an analogy is travel through the fourth dimension.  On paper you can flip all the arrows and the equations still work.  That doesn't mean we design systems to accommodate "traveling backward in time" and "traveling forward in time" with equal preference.


A maximal type T[i] will be chosen if the type parameter a[i] appears
contravariantly (§4.5) in the type T of the expression. A minimal type
T[i] will be chosen in all other situations, i.e. if the variable appears
covariantly, non-variantly or not at all in the type T. We call such a
substitution an optimal solution of the given constraint system for the
type T.

Paul Phillips
May 28, 2012, 4:36:49 PM
to scala-l...@googlegroups.com
Also, you're making it more complicated than it is.  Rather than attempt to deconstruct your layers, can you just tell me what the mystery is.

trait A[+T]  // we prefer A[String] over A[Any] if both are available
trait A[-T]  // currently: we prefer A[Any] over A[String] if both are available
trait A[-T]  // changed: we prefer A[String] over A[Any] if both are available

That's it.  We can talk about more complicated cases, but so far it doesn't seem like people understand this one.

Erik Osheim
May 28, 2012, 4:46:03 PM
to scala-l...@googlegroups.com
Right.

I'd expect the type you ask for (e.g. Ord[Tiger]) to always be more
specific, rather than immediately generalizing to Ord[Any],
Ord[Animal], Ord[Cat], etc.

Is there an example where using "reversed specificity" for
contravariant implicit resolution (i.e. preferring Ord[Tiger] over
Ord[Cat]) would cause problems? Bonus points if the example works
currently and does something useful! :)

-- Erik

martin odersky
May 28, 2012, 4:56:35 PM
to scala-l...@googlegroups.com
I don't know. But for the moment I would be interested to just see what we are defining here. Can someone give a complete definition of "reverse specificity"? The one definition I can think of is not pretty: repeat all rules that we have given for subtyping, but change the rule for type arguments:

T[X] more-specific-than T[Y]  

if X more-specific-than Y, irrespective of variance annotations. 

It works for the example but it repeats a lot of rules. And I do not see a semantic justification (in the sense that types are sets of values), so it does feel wrong to me.

Cheers

 - Martin
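Martin's candidate rule can be sketched over a toy model of types. This is only an illustration of the rule as stated above, not a spec proposal; the Named/App encoding and the little hierarchy are invented here:

```scala
// Toy model of: T[X] more-specific-than T[Y] if X more-specific-than Y,
// irrespective of variance annotations.
object ReverseSpecificity {
  sealed trait Ty
  case class Named(name: String, parents: List[Named]) extends Ty
  case class App(constructor: String, arg: Ty) extends Ty

  // Plain inheritance-based subtyping on named classes.
  def subtype(a: Named, b: Named): Boolean =
    a == b || a.parents.exists(subtype(_, b))

  def moreSpecific(a: Ty, b: Ty): Boolean = (a, b) match {
    case (x: Named, y: Named)             => subtype(x, y)
    case (App(c, x), App(d, y)) if c == d => moreSpecific(x, y) // variance ignored
    case _                                => false
  }

  val any      = Named("Any", Nil)
  val iterable = Named("Iterable", List(any))
  val list     = Named("List", List(iterable))

  def main(args: Array[String]): Unit = {
    // Ord[List] beats Ord[Iterable] under this rule, even though a
    // contravariant Ord would make Ord[Iterable] <: Ord[List] by subtyping.
    println(moreSpecific(App("Ord", list), App("Ord", iterable)))  // true
    println(moreSpecific(App("Ord", iterable), App("Ord", list)))  // false
  }
}
```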

Paul Phillips
May 28, 2012, 4:58:53 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 1:46 PM, Erik Osheim <er...@plastic-idolatry.com> wrote:
> Is there an example where using "reversed specificity" for
> contravariant implicit resolution (i.e. preferring Ord[Tiger] over
> Ord[Cat]) would cause problems? Bonus points if the example works
> currently and does something useful! :)

An example of where you would want or depend upon the current behavior would certainly be interesting.  I have never seen one.  (One reason I've never seen one is that contravariance is used so rarely - of course, the subject under discussion is a large chunk of the reason contravariance is used so rarely.)

Paul Phillips
May 28, 2012, 5:01:05 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 1:56 PM, martin odersky <martin....@epfl.ch> wrote:
> It works for the example but it repeats a lot of rules.

Don't cut and paste, refactor.

Tony Morris
May 28, 2012, 5:53:25 PM
to scala-l...@googlegroups.com

FYI contravariance is not so rare. We see this issue all the time.

Paul Phillips
May 28, 2012, 5:58:57 PM
to scala-l...@googlegroups.com
On Mon, May 28, 2012 at 2:53 PM, Tony Morris <tonym...@gmail.com> wrote:
>
> FYI contravariance is not so rare. We see this issue all the time.

I know, that's what my last sentence meant.  "The subject under
discussion is a large chunk of the reason contravariance is used so
rarely." Meaning, you use contravariance less than you would like
because of this.

Tony Morris
May 28, 2012, 6:42:13 PM
to scala-l...@googlegroups.com

Ah right. Yes I can attest to this, in that I prefer to define a contramap method than declare contravariance with a - symbol. Too much nasty lurks there.

Jesper Nordenberg
May 29, 2012, 2:40:34 AM
to scala-l...@googlegroups.com, Paul Phillips
Paul Phillips skrev 2012-05-28 22:36:
> Also, you're making it more complicated than it is. Rather than attempt
> to deconstruct your layers, can you just tell me what the mystery is.
>
> trait A[+T] // we prefer A[String] over A[Any] if both are available
> trait A[-T] // currently: we prefer A[Any] over A[String] if both
> are available
> trait A[-T] // changed: we prefer A[String] over A[Any] if both
> are available

Given:

implicit val aa = new A[Any] {}
implicit val as = new A[String] {}
def foo[T](implicit a: A[T]) = a
foo[String] // == aa !!!

It just feels wrong that scalac choses A[Any] in this case, regardless
of variance annotation on A's type parameter. So +10 for this change.

/Jesper Nordenberg
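A self-contained version of Jesper's fragment (the object wrapper and toString markers are added here for illustration; the behavior in the comment is the one the thread reports):

```scala
// Contravariant A, as in Paul's snippet quoted above.
trait A[-T]

object Demo {
  implicit val aa: A[Any]    = new A[Any]    { override def toString = "aa" }
  implicit val as: A[String] = new A[String] { override def toString = "as" }

  def foo[T](implicit a: A[T]) = a

  def main(args: Array[String]): Unit = {
    // Both aa and as are eligible for A[String]: A is contravariant, so
    // A[Any] <: A[String]. The thread reports the 2012 compiler picking the
    // subtype A[Any] as "more specific" and printing "aa" - Jesper's "!!!".
    // Later compilers changed this, so the output may differ by version.
    println(foo[String])
  }
}
```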

Paul Phillips
May 29, 2012, 3:53:40 AM
to scala-l...@googlegroups.com
In case working code has any bearing, I pushed contravariant Ordering, PartialOrdering, and Equiv.


// Hey, it works right out of the box.
scala> List(List("abc"), Seq("def"), Set("aaa")).sorted
res0: List[Iterable[String] with String with Int => Any] = List(Set(aaa), List(abc), List(def))

// What trunk does.  Shoot, where did I leave my
//   Ordering[Iterable[String] with String with Int => Any]
// it was just here a second ago...
scala> List(List("abc"), Seq("def"), Set("aaa")).sorted
<console>:8: error: No implicit Ordering defined for Iterable[String] with String with Int => Any.
              List(List("abc"), Seq("def"), Set("aaa")).sorted
                                                        ^

martin odersky
May 29, 2012, 6:48:23 AM
to scala-l...@googlegroups.com
But what about this situation

implicit val aa: String => Object
implicit val bb: Object => String

Which one is more specific? In Scala, it's bb. It provides a better type _and_ works for more arguments. In types-as-sets-of-values terms, there are way fewer functions of type Object => String than there are of type String => Object. So, clearly bb's type is more specific.

However, in the proposed new scheme, you'd get an ambiguity. Neither aa nor bb is more specific than the other.

Not only does this break code, it is also very unintuitive. 

 - Martin


Paul Phillips
May 29, 2012, 8:51:14 AM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 3:48 AM, martin odersky <martin....@epfl.ch> wrote:
> implicit val aa: String => Object
> implicit val bb: Object => String

Do you have any non-contrived examples? Over here we're looking at unarguably real situations like Ordering and Foldable being crippled to the point of uselessness.  When is it exactly that people count on implicit resolution choosing Object => String over String => Object, and why aren't the people counting on that strangled in their sleep by others on their projects?

> Which one is more specific? In Scala, it's bb. It provides a better type _and_ works for more arguments. In types-as-sets-of-values terms, there are way fewer functions of type Object => String than there are of type String => Object. So, clearly bb's type is more specific.

Clearly, as long as one is sure to think only in "types as sets of values".  If everyone did that all the time we'd all be glad Ordering[Any] was chosen over Ordering[MySpecificType] and we wouldn't still be talking about this years later.  I expect that most people understand specificity to correlate with *programmer-provided* specificity.  And you can still only subclass in one direction.

> However, in the proposed new scheme, you'd get an ambiguity. Neither aa nor bb is more specific than the other.

I consider this a feature.  For a guy who put a bunch of features behind warning flags for being too dangerous, you're pretty cavalier about settling "String => Object" vs. "Object => String" via implicit resolution.
 
> Not only does this break code, it is also very unintuitive.

It's intuitive that in jesper's example, implicitly[A[String]] resolves to A[Any] rather than the implicit A[String] that was just defined?

I'd like to see the code it breaks.  Maybe it does somewhere - it wouldn't change whether it should be done - but I'd like to see who it is who is relying on "specificity" of this kind and how it is they are doing so.  A real-life usage doesn't seem like much to ask in light of the fairly ridiculous cumulative amount of effort I've now put into this, not to mention the efforts of numerous others.

Chris Marshall
May 29, 2012, 11:38:58 AM
to scala-l...@googlegroups.com
For what it's worth, I agree with Paul; it's frustrating that scalaz has made Equal invariant in v7 to get around these issues.

Chris

Jesper Nordenberg
May 29, 2012, 12:30:40 PM
to scala-l...@googlegroups.com, martin odersky
martin odersky skrev 2012-05-29 12:48:
> But what about this situation
>
> implicit val aa: String => Object
> implicit val bb: Object => String
>
> Which one is more specific? In Scala, it's bb. It provides a better
> type _and_ works for more arguments. In types-as-sets-of-values terms,
> there are way fewer functions of type Object => String than there are of
> type String => Object. So, clearly bb's type is more specific.

What's the context for the implicit search? Or are you talking about
some absolute ordering of specificity (because I don't think there is one)?

/Jesper Nordenberg

Jason Zaugg
May 29, 2012, 12:33:33 PM
to scala-l...@googlegroups.com
This search would consider both candidates:

implicitly[Object => Object]

-jason

Ryan Hendrickson
May 29, 2012, 12:41:21 PM
to scala-l...@googlegroups.com
> >> But what about this situation
> >>
> >> implicit val aa: String => Object
> >> implicit val bb: Object => String
> >>
> >> Which one is more specific? In Scala, it's bb. It provides a better
> >> type _and_ works for more arguments. In types-as-sets-of-values terms,
> >> there are way fewer functions of type Object => String than there are of
> >> type String => Object. So, clearly bb's type is more specific.
> >
> >
> > What's the context for the implicit search? Or are you talking about some
> > absolute ordering of specificity (because I don't think there is one)?
>
> This search would consider both candidates:
>
> implicitly[Object => Object]

Surely not? A (String => Object) is not an (Object => Object).






Jason Zaugg
May 29, 2012, 12:54:52 PM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 6:41 PM, Ryan Hendrickson
<Ryan.Hen...@bwater.com> wrote:
>> This search would consider both candidates:
>>
>>   implicitly[Object => Object]
>
> Surely not? A (String => Object) is not an (Object => Object).

Um, right. My mental contravariance switches appear to be faulty.

-jason

martin odersky
May 29, 2012, 1:01:52 PM
to scala-l...@googlegroups.com
It could be a polymorphic context such as S => T, for type variables S and T.

Cheers

 - Martin
  

Jeff Olson
May 29, 2012, 1:27:28 PM
to scala-l...@googlegroups.com
+10 for making this change (including make Ordering and friends contravariant). I've long wanted to see this fixed.

As to Martin's claim that this change would make the language more complex: I disagree, at least partially. A language with intuitive semantics is perceived as simple regardless of the underlying implementation or specification complexity. The converse is also true: a language with unintuitive semantics appears complex and overly complicated regardless of implementation.

Assume for the moment that Ordering was contravariant (as God intended) and go ask 10 scala developers which is more specific:

1. Ordering[Any]
2. Ordering[String]

Then ask them the same for

1. String => Any
2. Any => String

I haven't actually tried this, but I'm willing to bet everyone with a functioning brain will tell you that an Ordering[String] is more specific, and that you'll get very mixed results for the second. Most people will scratch their heads and admit they don't know.

So is it okay that the compiler issues an ambiguity error when forced to choose between String => Any and Any => String? Absolutely; after all, it's ambiguous to the ordinary developer. Is it okay that the compiler chooses Ordering[Any] over Ordering[String]? Absolutely not. Not only is it unintuitive, it's almost certainly not the desired behavior.

I have yet to see a compelling real world example of why the current implicit search behavior with regards to contravariance is desirable. On the other hand, as Paul rightly points out, we have dozens of real world examples where new behavior is not only desirable but necessary.

So please, rather than appealing to some fuzzy notion of "feeling wrong" let's come up with some real examples, or get to work figuring out how to fix the spec.

-Jeff

Jeff Olson
May 29, 2012, 1:31:18 PM
to scala-l...@googlegroups.com

Erik Osheim
May 29, 2012, 1:43:36 PM
to scala-l...@googlegroups.com
On Mon, May 28, 2012 at 10:56:35PM +0200, martin odersky wrote:
> I don't know. But for the moment I would be interested to just see what we
> are defining here. Can someone give a complete definition of "reverse
> specificity"? The one definition I can think of is not pretty: repeat all
> rules that we have given for subtyping, but change the rule for type
> arguments:
>
> T[X] more-specific-than T[Y]
>
> if X more-specific-than Y, irrespective of variance annotations.

Unfortunately I don't think I can give a better definition. Maybe
Adriaan could chime in [1]?

Basing specificity(T[X], T[Y]) on specificity(X, Y) conforms to how
developers are thinking about (and trying to use) these sorts of
implicits right now. I think that this change would even make
covariance a bit nicer in some cases.

It seems like from the developers' point of view this change would be
all up side.

-- Erik

[1] https://github.com/adriaanm/scala-dev/wiki/Contravariance-and-Specificity

Jesper Nordenberg
May 29, 2012, 3:52:30 PM
to scala-l...@googlegroups.com, martin odersky
martin odersky skrev 2012-05-29 19:01:
> It could be a polymorphic context such as S => T, for type variables S
> and T.

Please give a complete example because I can't make this implicit search
work unambiguously.

/Jesper Nordenberg

Paul Phillips
May 29, 2012, 3:55:00 PM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 10:27 AM, Jeff Olson <jeff.d...@gmail.com> wrote:
> Then ask them the same for
>
> 1. String => Any
> 2. Any => String

And finally, ask them this one:

1. Any => Any
2. String => String

Presently ambiguous, becomes unambiguous.  If "unintuitive" is indeed a potential knock on the change, that's good news, because on that criterion we can't lose.  In any objective assessment of which is more intuitive, status quo would be crushed.

martin odersky
May 29, 2012, 4:45:26 PM
to scala-l...@googlegroups.com
Here's a self-contained example. You have to exclude some predefined function values in Predef, which is achieved by the import.
 
import Predef.println

object Test extends App {

  def foo[S >: Null, T](implicit x: S => T) = x(null)

  implicit val f: String => Object = x => { println("String => Object"); x }
  implicit val g: Object => String = x => { println("Object => String"); ""+x }

  println(foo)

}

This will print: Object => String, so g is selected over f.

Cheers

 - Martin


John Nilsson
May 29, 2012, 4:54:57 PM
to scala-l...@googlegroups.com
Isn't the behavior sought here exactly the same as for static overloading?

When resolving an overload, a set of operations is compared with
respect to their input types and the one with the most "specific"
argument type is selected. Being operations, the types in question
are in a contravariant position, no?

So this program should print two identical lines given that
overloading and implicit search should be the same:

def m(o:Object):String = "os"
def m(s:String):Object = "so"

implicit val os: Object => String = m
implicit val so: String => Object = m

def t1 = m(null:String)
def t2(implicit m: String => Object) = m(null:String)

println(t1)
println(t2)

BR,
John

Daniel Sobral
May 29, 2012, 5:00:02 PM
to scala-l...@googlegroups.com
Well, yes, and you specified Null as the lower bound, so, presumably,
you could have worked with String => Object as well. This is similar
to the Nothing-inference problem, and would show up when you have
overly generic types _and_ more than one competing implicit.

Personally, I'd live happily with it.

--
Daniel C. Sobral

I travel to the future all the time.

√iktor Ҡlang
May 29, 2012, 5:17:30 PM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 11:00 PM, Daniel Sobral <dcso...@gmail.com> wrote:
On Tue, May 29, 2012 at 5:45 PM, martin odersky <martin....@epfl.ch> wrote:
>
>
> On Tue, May 29, 2012 at 9:52 PM, Jesper Nordenberg <mega...@yahoo.com>
> wrote:
>>
>> martin odersky skrev 2012-05-29 19:01:
>>
>>> It could be a polymorphic context such as S => T, for type variables S
>>> and T.
>>
>>
>> Please give a complete example because I can't make this implicit search
>> work unambiguously.
>>
> Here's a self-contained example. You have to exclude some predefined
> function values in Predef, which is achieved by the import.
>
> import Predef.println
>
> object Test extends App {
>
>   def foo[S >: Null, T](implicit x: S => T) = x(null)
>
>   implicit val f: String => Object = x => { println("String => Object"); x }
>   implicit val g: Object => String = x => { println("Object => String");
> ""+x }
>
>   println(foo)
>
> }
>
> This will print: Object => String, so g is selected over f.

> Well, yes, and you specified Null as the lower bound, so, presumably,
> you could have worked with String => Object as well. This is similar
> to the Nothing-inference problem,

Ah, such fond memories are associated with this one. * mind wanders off *

> and would show up when you have
> overly generic types _and_ more than one competing implicit.
>
> Personally, I'd live happily with it.



--
Viktor Klang

Akka Tech Lead
Typesafe - The software stack for applications that scale

Twitter: @viktorklang

Jesper Nordenberg
May 29, 2012, 5:32:02 PM
to scala-l...@googlegroups.com, martin odersky
martin odersky skrev 2012-05-29 22:45:
> Here's a self-contained example. You have to exclude some predefined
> function values in Predef, which is achieved by the import.
> import Predef.println
>
> object Test extends App {
>
> def foo[S >: Null, T](implicit x: S => T) = x(null)
>
> implicit val f: String => Object = x => { println("String =>
> Object"); x }
> implicit val g: Object => String = x => { println("Object =>
> String"); ""+x }
>
> println(foo)
>
> }
>
> This will print: Object => String, so g is selected over f.

I fail to see why this behavior is desirable. S and T are basically
unbounded so why should either implicit be more specific than the other?
An ambiguity error seems natural here.

I think of specificity as inverse "type distance", i.e. given A <: B <:
C the distance between F[A] and F[C] is greater than the distance
between F[A] and F[B] regardless of the variance annotation on F's type
parameter. The variance annotation only specifies the subtype relation
between for example F[A] and F[B].

/Jesper Nordenberg
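Jesper's informal "type distance" can be sketched as counting inheritance steps; the hierarchy, names, and function below are illustrative only:

```scala
object TypeDistance {
  // Hypothetical linear hierarchy A <: B <: C, as in Jesper's mail.
  val parent = Map("A" -> "B", "B" -> "C")

  // Steps from `sub` up to `sup`; None if `sup` is not an ancestor.
  def distance(sub: String, sup: String): Option[Int] =
    if (sub == sup) Some(0)
    else parent.get(sub).flatMap(distance(_, sup)).map(_ + 1)

  def main(args: Array[String]): Unit = {
    // For a query at F[A], a candidate F[B] is closer than F[C] whatever
    // F's variance, so a distance-based rule would rank F[B] more specific.
    println(distance("A", "B"))  // Some(1)
    println(distance("A", "C"))  // Some(2)
  }
}
```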

martin odersky
May 29, 2012, 5:47:47 PM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 11:32 PM, Jesper Nordenberg <mega...@yahoo.com> wrote:
> martin odersky skrev 2012-05-29 22:45:
>> Here's a self-contained example. You have to exclude some predefined
>> function values in Predef, which is achieved by the import.
>>
>> import Predef.println
>>
>> object Test extends App {
>>
>>   def foo[S >: Null, T](implicit x: S => T) = x(null)
>>
>>   implicit val f: String => Object = x => { println("String => Object"); x }
>>   implicit val g: Object => String = x => { println("Object => String"); ""+x }
>>
>>   println(foo)
>>
>> }
>>
>> This will print: Object => String, so g is selected over f.
>
> I fail to see why this behavior is desirable. S and T are basically unbounded so why should either implicit be more specific than the other?

Because there are fewer functions like f than functions like g.


> I think of specificity as inverse "type distance", i.e. given A <: B <: C the distance between F[A] and F[C] is greater than the distance between F[A] and F[B] regardless of the variance annotation on F's type parameter. The variance annotation only specifies the subtype relation between for example F[A] and F[B].


I'd like to challenge you to come up with a semantic characterization of what you intend to say. 

I see this question causes a lot of heat and strong feelings.
And I agree that better support for Ordered and friends is very desirable. But at the same time I am not willing to make what I still think are ad-hoc changes to the Scala type system to accommodate that. 

I think the whole thing calls for a SIP. No big need to motivate the change, but let's provide the precise rules and discuss how it fits into Scala. Ideally, I'd like to see another SIP that pursues the implicit variance idea that I had outlined in the first mail to this thread. Then we can discuss
which of the two (if any) should go in. 

One word regarding timing. I'm currently totally swamped with getting the 2.10 release out of the door and improving compiler speed by making it more incremental. These two issues have to take precedence. Once I have a little bit of air I am happy to jump back into the discussions.

Cheers

 - Martin


martin odersky

unread,
May 29, 2012, 5:49:18 PM5/29/12
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 11:47 PM, martin odersky <martin....@epfl.ch> wrote:


I fail to see why this behavior is desirable. S and T are basically unbounded so why should either implicit be more specific than the other? 

Because there are fewer functions like f than functions like g.

Sorry, I meant to say: "fewer functions like g than functions like f".

 -- Martin 

MP

unread,
May 29, 2012, 6:24:23 PM5/29/12
to scala-l...@googlegroups.com

 Hi,

I'm trying to follow the discussion from an outside perspective, and
your examples look very much compatible with the standard rule for
when one function type is a subtype of another (e.g. as in TAPL
§15.2). So, is the core of the argument whether or not Scala should
follow this convention? Are there other aspects which I am missing
here?

 Best,

  Markus

Rex Kerr

unread,
May 29, 2012, 6:32:42 PM5/29/12
to scala-l...@googlegroups.com
Markus (and anyone else wondering what this thread is about):

The core of the argument is about Scala's type inference / implicit selection mechanism.

In general, when it has multiple options it picks the most selective.  This is usually what you want.

The argument here is whether "most selective" and "deepest subtype" are the same thing with functions (and other contravariant entities).  That is, do you say
  String => String
is more or less selective than
  Object => String
?

With subtyping, Object => String is more selective since anywhere you can pass in String => String, an Object => String can do the job.  Hence, Object => String <: String => String.

However, one can also argue that String => String is more precisely defined, and so perhaps if you had to choose between the two you ought to pick String => String (i.e. the supertype).

That's what this discussion is about--do you or do you not follow the typing relationships?

Right now, Scala just looks at the typing relationship.

In practice, it seems that many people have identified cases where subtyping is _not_ what they want; instead, even though a type parameter is marked as contravariant, when considering which item to select that is compatible, one should proceed as if it were covariant (i.e. select the deepest subtype of the argument, not of the type-parameterized class).

Without trying it this way, it is less clear whether the other case--where one really does want subtyping relationships to govern selection--would be sorely missed if the behavior was changed.
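
The subtyping direction here is easy to confirm with the compiler (a minimal illustrative snippet):

```scala
object ContravarianceDemo extends App {
  // Functions are contravariant in their argument: anywhere a
  // String => String is expected, an Object => String will do, i.e.
  // (Object => String) <: (String => String).
  val f: String => String = (x: Object) => x.toString  // compiles

  // The reverse direction is rejected:
  // val g: Object => String = (x: String) => x  // would not compile

  println(f("hi"))  // prints hi
}
```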

  --Rex

Paul Phillips

unread,
May 29, 2012, 6:52:34 PM5/29/12
to scala-l...@googlegroups.com


On Tue, May 29, 2012 at 3:32 PM, Rex Kerr <ich...@gmail.com> wrote:
Without trying it this way, it is less clear whether the other case--where one really does want subtyping relationships to govern selection--would be sorely missed if the behavior was changed.


As far as I have observed, nobody will miss it because it's never what you want.  (I'd say "almost never" if I could think of any situation where I'd want it.) If anyone wants to look for counterexamples, there it is.

Message has been deleted

Paul Phillips

unread,
May 29, 2012, 8:45:38 PM5/29/12
to scala-l...@googlegroups.com
Isn't it traditional among mathematicians that one have reasons for things? Maybe I'm thinking of some other group. You could have written "I've got a bad feelin' in me bones" and we'd be no worse informed. The term for what you're doing is "fud", FYI.

On Tuesday, May 29, 2012, MP wrote:


 Hi Rex,

thanks for the clarification. My take on this is the
following (please take me seriously at your own risk):
myself, I'm a mathematician and I'm dabbling with
representing concepts and algorithms of representation
theory and category theory in Scala. For me, it is 
really great to see that nowadays well-typed (functional)
programming languages such as Scala (or Haskell, ...)
are available and actually usable for such purposes
from a very practical perspective (in particular wrt.
performance and availability of "practical" libraries,
in contrast to actually using a proof assistant system
or such). I'm pretty sure that I am not the only person
having this view on Scala and that Scala will be a
very interesting option in this sector. This is certainly
a far cry from the billion dollar generating query-some-
database business which most people probably have in 
mind when they think about programming on the JVM.
However, from where I'm standing, there seems to exist
a trend and a good motivation for programming languages
having a solid foundation in typed logic. From that
perspective, I completely understand Martin's refusal 
to accept a modification without communicating how it
behaves wrt. the theoretical underpinnings. You might 
end up with putting lipstick on some libraries, but in
the long run it will bite us.

 Just my 2c,

  Markus


Ittay Dror

unread,
May 29, 2012, 11:58:30 PM5/29/12
to scala-language
FWIW, maybe implicit resolution should be more like virtual method
lookup. That is given:

class A {
  def m = println("A")
}

class B extends A {
  override def m = println("B")
}

then:
(new B).m

would print: "B" (obviously...)

One can look at the choosing of which 'm' to invoke as choosing
between two implementations, of A=>Unit and B=>Unit. normally, A=>Unit
is more specific, but the B=>Unit is chosen. In other words, the
implicit argument 'this' is covariant.

So the notion of a covariant argument is not so absurd. Also, many
(most?) cases use implicits to encode type classes, which can be
thought of (at least in my mind) as a refinement of 'this' (e.g.,
sort[A: Ordering](list: List[A]) means the list of As can be ordered),
so it would be intuitive for implicit arguments to be covariant just
as 'this' is.

On May 28, 5:33 am, martin odersky <martin.oder...@epfl.ch> wrote:
> On Mon, May 28, 2012 at 1:13 AM, Paul Phillips <pa...@improving.org> wrote:
> > [CC list are probably all on scala-language, but included for interest.]
>
> > I don't know how to build a more open-and-shut case than this. (And I
> > won't - whatever remains to be done is unlikely to be done by me.) Given
> > this test case, also attached:
>
> >https://github.com/paulp/scala/blob/topic/contrarivariance/test/files...
>
> > In trunk it prints:
>
> > If there are several eligible arguments which match the
> > implicit parameter’s type, a most specific one will be
> > chosen using the rules of static overloading resolution.
> >   -- SLS 7.2, "Implicit Parameters"
>
> > Static overloading selection: 1  2  3
> >     Implicit value selection: 1  1  1
>
> > In the branch indicated above (paulp, topic/contrarivariance)  it prints
> > the same quote from the specification, followed by
>
> > Static overloading selection: 1  2  3
> >     Implicit value selection: 1  2  3
>
> > So as I see it, scala is not implementing its own specification and we
> > should have fixed this bug years ago.  It goes back to at least October
> > 2009, closed wontfix:https://issues.scala-lang.org/browse/SI-2509
>
> No, there is a crucial difference: The f function
>
> is _applied_ to arguments of the three different types, yet the
> implicit values stand alone. We all know that variance reverses in argument
> position and that's the effect you are seeing. So your example demonstrates
> in effect that the spec and compiler are in agreement. I have an example
> showing the right correspondence with overloading at the end of this mail.
>
> We have talked about this many times before. To make progress here you'd
> have to invent a completely new notion of specificity alongside the
> subtyping relation we have. Specificity (i.e. minimize all type parameters
> regardless of variance in the style of Eiffel) just feels wrong to me from
> a type-systematic point of view. It's also a big change, and it's a change
> that would make the language considerably more complicated. And then I am
> not even sure we won't run into a new set of borderline cases where
> specificity is not the right criterion and we want subtyping instead.
>
> It would be cleaner to have variance annotations in implicit parameters.
> E.g. something like:
>
>   def [T] foo(-implicit x: Ord[T])
>
> to indicate that we want to maximize the Ord implicit (because all we do is
> apply to an argument later), instead of minimizing it. It would fix the
> problem in a clean way. Unfortunately it's a also more complexity in the
> users face. That's why the ticket was a won't fix.
>
> Cheers
>
>  - Martin
>
> Here's an example that shows overloading resolution of the Ord values in
> the sense of the spec. I had to equip the different Ords with implicits
> themselves because otherwise we'd have run into a duplicate method error.
> As expected, it prints 1, i.e. the Ord[Iterable] is chosen over the others.
> So overloading and implicit resolution are in agreement.
>
> trait A
> trait B
> trait C
> trait Ord[-T]
>
> object Test extends App {
>
>   implicit val a = new A {}
>   implicit val b = new B {}
>   implicit val c = new C {}
>
>   def Ord(implicit x: A): Ord[Iterable[Int]] = new Ord[Iterable[Int]] {
> override def toString = "1" }
>   def Ord(implicit x: B): Ord[     Seq[Int]] = new Ord[     Seq[Int]] {
> override def toString = "2" }
>   def Ord(implicit x: C): Ord[    List[Int]] = new Ord[    List[Int]] {
> override def toString = "3" }
>
>   println(Ord)
>
>
>
>
>
>
>
>
>
> }
> > As noted in my commit comment, once selection is done properly it's easy
> > to make Ordering and friends contravariant.  Everything works the way you'd
> > imagine.  It's so nice seeing the Iterable[T] implicit Ordering used for any
> > subclass of Iterable without involving the sketchy hijinx seen in
> > Ordering#ExtraImplicits.  But I didn't bundle any of those changes.
>
> > NOTE: the particulars of how I modified "isAsSpecific" are not anything
> > I'm advocating for, I'm sure it's all totally wrong and etc.  My offering
> > is:
>
> >  a) evidence that the specification already mandates that specificity is
> > based on inheritance, not on the subtype lattice
> >  b) a complete working implementation, however flawed - all tests pass
>
> --
> Martin Odersky
> Prof., EPFL <http://www.epfl.ch> and Chairman, Typesafe<http://www.typesafe.com>
> PSED, 1015 Lausanne, Switzerland
> Tel. EPFL: +41 21 693 6863
> Tel. Typesafe: +41 21 691 4967

Paul Phillips

unread,
May 30, 2012, 12:36:21 AM5/30/12
to scala-l...@googlegroups.com


On Tue, May 29, 2012 at 8:58 PM, Ittay Dror <ittay...@gmail.com> wrote:
FWIW, maybe implicit resolution should be more like virtual method
lookup.

That's the angle I attempted to take a year or so ago here.



[excerpt]

I attempt to analogize implicit search for a class with a contravariant parameter to method dispatch, as follows.

A method overload:

    def f(x: Any, y: Any): Int
    def f(x: Dog, y: Dog): Int

We know which of those methods is chosen whenever possible.  When one is 
looking for an implicit value with a contravariant type parameter, the 
only way to use the value with respect to the contravariant type is by 
calling methods which accept that type as a parameter:

    trait Ordering[-T] { def cmp(x: T, y: T): Int }

If we view the generic version of "cmp" as an overload across all 
possible Ts, then for any concrete T, the most specific possible method 
(and I mean the SLS definition of "most specific") is

    def cmp(x: T, y: T): Int

And the least is

    def cmp(x: Any, y: Any): Int

Now if we choose to view implicit resolution as static overloading 
resolution taking place with an extra layer of indirection, then the 
choice between Ordering[Any] and Ordering[Dog] is the same as the choice 
between

    def cmp(x: Dog, y: Dog): Int
    def cmp(x: Any, y: Any): Int
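
The overloading half of this analogy is directly observable (a self-contained sketch; Dog is a stand-in class):

```scala
object OverloadDemo extends App {
  class Dog
  def f(x: Any, y: Any): Int = { println("f(Any, Any)") ; 1 }
  def f(x: Dog, y: Dog): Int = { println("f(Dog, Dog)") ; 2 }

  // Static overloading resolution picks the most specific applicable
  // alternative, so Dog arguments select f(Dog, Dog).
  f(new Dog, new Dog)  // prints f(Dog, Dog)
  f("a", "b")          // prints f(Any, Any)
}
```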

John Nilsson

unread,
May 30, 2012, 3:10:25 AM5/30/12
to scala-l...@googlegroups.com

Which again establishes that implicit lookup and overload resolution can, and should, be the same.

It seems to me that Martin also agrees with this.

So instead of focusing on motivating the implicit case maybe an easier question is why overloading works the way it does?

Btw. From a mathematical point of view it seems that overload resolution is just an application of implicit search with syntactic sugar on top :)

BR
John

Message has been deleted

Daniel Sobral

unread,
May 30, 2012, 8:42:11 AM5/30/12
to scala-l...@googlegroups.com
On Wed, May 30, 2012 at 4:10 AM, John Nilsson <jo...@milsson.nu> wrote:
> Which again establishes that implicit lookup and overload resolution can,
> and should, be the same.

Overloads are not parameterized, much less variant.

With overload, you have B <: A. The problem here is when B <: A but
T[A] <: T[B]. And, mind you, there's even the case where T[A] and T[B]
are not subtypes of each other.
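
That reversal can be witnessed by the compiler itself (illustrative only):

```scala
object VarianceReversal extends App {
  trait Ord[-T]
  // String <: Any, but contravariance flips the relation for Ord:
  // Ord[Any] <: Ord[String], so the compiler finds this evidence.
  val ev = implicitly[Ord[Any] <:< Ord[String]]
  // implicitly[Ord[String] <:< Ord[Any]]  // would not compile
  println("Ord[Any] <: Ord[String]")
}
```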

Regardless, Martin says SIP. Two SIPs: one for general rule change,
and the other for declaration-site marking. I say writing that is the
most profitable use of time to those interested in this issue at this
point.

Paul Phillips

unread,
May 30, 2012, 11:05:26 AM5/30/12
to scala-l...@googlegroups.com

On Wed, May 30, 2012 at 5:42 AM, Daniel Sobral <dcso...@gmail.com> wrote:
Overloads are not parameterized, much less variant.

With overload, you have B <: A. The problem here is when B <: A but
T[A] <: T[B]. And, mind you, there's even the case where T[A] and T[B]
are not subtypes of each other.

Did you not look at any of the code in this thread? Do you think overloading behavior is irrelevant to this question? The following makes it as clear as I know how to make it.  Here is the output of this code:

123412342244
123412342244
214321432X4X
214321432X4X
12341234X2X4
12341234X2X4
214321432244
214321432244

Please linger on this result.  Except where types do not conform, this produces *exactly the same* output for contravariant Ord and covariant Bag.  This is the definition of specificity to which scala *already* adheres.

trait Ord1[-T]
trait Ord2[-T] extends Ord1[T]
trait Bag1[+T]
trait Bag2[+T] extends Bag1[T]

object Test {
  def f(x: Ord1[AnyRef]): Ord1[AnyRef] = { print("1") ; null }
  def f(x: Ord1[String]): Ord2[AnyRef] = { print("2") ; null }
  def f(x: Ord2[AnyRef]): Ord1[AnyRef] = { print("3") ; null }
  def f(x: Ord2[String]): Ord2[AnyRef] = { print("4") ; null }
  
  def f2(x: Ord1[AnyRef]): Ord1[String] = { print("1") ; null }
  def f2(x: Ord1[String]): Ord2[String] = { print("2") ; null }
  def f2(x: Ord2[AnyRef]): Ord1[String] = { print("3") ; null }
  def f2(x: Ord2[String]): Ord2[String] = { print("4") ; null }
  
  def f3(x: Ord1[String]): Ord1[AnyRef] = { print("1") ; null }
  def f3(x: Ord1[AnyRef]): Ord2[AnyRef] = { print("2") ; null }
  def f3(x: Ord2[String]): Ord1[AnyRef] = { print("3") ; null }
  def f3(x: Ord2[AnyRef]): Ord2[AnyRef] = { print("4") ; null }
  
  def f4(x: Ord1[String]): Ord1[String] = { print("1") ; null }
  def f4(x: Ord1[AnyRef]): Ord2[String] = { print("2") ; null }
  def f4(x: Ord2[String]): Ord1[String] = { print("3") ; null }
  def f4(x: Ord2[AnyRef]): Ord2[String] = { print("4") ; null }
  
  def g(x: Bag1[AnyRef]): Bag1[AnyRef] = { print("1") ; null }
  def g(x: Bag1[String]): Bag2[AnyRef] = { print("2") ; null }
  def g(x: Bag2[AnyRef]): Bag1[AnyRef] = { print("3") ; null }
  def g(x: Bag2[String]): Bag2[AnyRef] = { print("4") ; null }
  
  def g2(x: Bag1[AnyRef]): Bag1[String] = { print("1") ; null }
  def g2(x: Bag1[String]): Bag2[String] = { print("2") ; null }
  def g2(x: Bag2[AnyRef]): Bag1[String] = { print("3") ; null }
  def g2(x: Bag2[String]): Bag2[String] = { print("4") ; null }
  
  def g3(x: Bag1[String]): Bag1[AnyRef] = { print("1") ; null }
  def g3(x: Bag1[AnyRef]): Bag2[AnyRef] = { print("2") ; null }
  def g3(x: Bag2[String]): Bag1[AnyRef] = { print("3") ; null }
  def g3(x: Bag2[AnyRef]): Bag2[AnyRef] = { print("4") ; null }
  
  def g4(x: Bag1[String]): Bag1[String] = { print("1") ; null }
  def g4(x: Bag1[AnyRef]): Bag2[String] = { print("2") ; null }
  def g4(x: Bag2[String]): Bag1[String] = { print("3") ; null }
  def g4(x: Bag2[AnyRef]): Bag2[String] = { print("4") ; null }

  def main(args: Array[String]): Unit = {
    f(null: Ord1[AnyRef])
    f(null: Ord1[String])
    f(null: Ord2[AnyRef])
    f(null: Ord2[String])
    (f(null: Ord1[AnyRef]): Ord1[_])
    (f(null: Ord1[String]): Ord1[_])
    (f(null: Ord2[AnyRef]): Ord1[_])
    (f(null: Ord2[String]): Ord1[_])
    (f(null: Ord1[AnyRef]): Ord2[_])
    (f(null: Ord1[String]): Ord2[_])
    (f(null: Ord2[AnyRef]): Ord2[_])
    (f(null: Ord2[String]): Ord2[_])
    println("")
    
    f2(null: Ord1[AnyRef])
    f2(null: Ord1[String])
    f2(null: Ord2[AnyRef])
    f2(null: Ord2[String])
    (f2(null: Ord1[AnyRef]): Ord1[_])
    (f2(null: Ord1[String]): Ord1[_])
    (f2(null: Ord2[AnyRef]): Ord1[_])
    (f2(null: Ord2[String]): Ord1[_])
    (f2(null: Ord1[AnyRef]): Ord2[_])
    (f2(null: Ord1[String]): Ord2[_])
    (f2(null: Ord2[AnyRef]): Ord2[_])
    (f2(null: Ord2[String]): Ord2[_])
    println("")
    
    f3(null: Ord1[AnyRef])
    f3(null: Ord1[String])
    f3(null: Ord2[AnyRef])
    f3(null: Ord2[String])
    (f3(null: Ord1[AnyRef]): Ord1[_])
    (f3(null: Ord1[String]): Ord1[_])
    (f3(null: Ord2[AnyRef]): Ord1[_])
    (f3(null: Ord2[String]): Ord1[_])
    (f3(null: Ord1[AnyRef]): Ord2[_])
    print("X") // (f3(null: Ord1[String]): Ord2[_])
    (f3(null: Ord2[AnyRef]): Ord2[_])
    print("X") // (f3(null: Ord2[String]): Ord2[_])
    println("")
    
    f4(null: Ord1[AnyRef])
    f4(null: Ord1[String])
    f4(null: Ord2[AnyRef])
    f4(null: Ord2[String])
    (f4(null: Ord1[AnyRef]): Ord1[_])
    (f4(null: Ord1[String]): Ord1[_])
    (f4(null: Ord2[AnyRef]): Ord1[_])
    (f4(null: Ord2[String]): Ord1[_])
    (f4(null: Ord1[AnyRef]): Ord2[_])
    print("X") // (f4(null: Ord1[String]): Ord2[_])
    (f4(null: Ord2[AnyRef]): Ord2[_])
    print("X") // (f4(null: Ord2[String]): Ord2[_])
    println("")
    
    g(null: Bag1[AnyRef])
    g(null: Bag1[String])
    g(null: Bag2[AnyRef])
    g(null: Bag2[String])
    (g(null: Bag1[AnyRef]): Bag1[_])
    (g(null: Bag1[String]): Bag1[_])
    (g(null: Bag2[AnyRef]): Bag1[_])
    (g(null: Bag2[String]): Bag1[_])
    print("X") // (g(null: Bag1[AnyRef]): Bag2[_])
    (g(null: Bag1[String]): Bag2[_])
    print("X") // (g(null: Bag2[AnyRef]): Bag2[_])
    (g(null: Bag2[String]): Bag2[_])
    println("")

    g2(null: Bag1[AnyRef])
    g2(null: Bag1[String])
    g2(null: Bag2[AnyRef])
    g2(null: Bag2[String])
    (g2(null: Bag1[AnyRef]): Bag1[_])
    (g2(null: Bag1[String]): Bag1[_])
    (g2(null: Bag2[AnyRef]): Bag1[_])
    (g2(null: Bag2[String]): Bag1[_])
    print("X") // (g2(null: Bag1[AnyRef]): Bag2[_])
    (g2(null: Bag1[String]): Bag2[_])
    print("X") // (g2(null: Bag2[AnyRef]): Bag2[_])
    (g2(null: Bag2[String]): Bag2[_])
    println("")
    
    g3(null: Bag1[AnyRef])
    g3(null: Bag1[String])
    g3(null: Bag2[AnyRef])
    g3(null: Bag2[String])
    (g3(null: Bag1[AnyRef]): Bag1[_])
    (g3(null: Bag1[String]): Bag1[_])
    (g3(null: Bag2[AnyRef]): Bag1[_])
    (g3(null: Bag2[String]): Bag1[_])
    (g3(null: Bag1[AnyRef]): Bag2[_])
    (g3(null: Bag1[String]): Bag2[_])
    (g3(null: Bag2[AnyRef]): Bag2[_])
    (g3(null: Bag2[String]): Bag2[_])
    println("")

    g4(null: Bag1[AnyRef])
    g4(null: Bag1[String])
    g4(null: Bag2[AnyRef])
    g4(null: Bag2[String])
    (g4(null: Bag1[AnyRef]): Bag1[_])
    (g4(null: Bag1[String]): Bag1[_])
    (g4(null: Bag2[AnyRef]): Bag1[_])
    (g4(null: Bag2[String]): Bag1[_])
    (g4(null: Bag1[AnyRef]): Bag2[_])
    (g4(null: Bag1[String]): Bag2[_])
    (g4(null: Bag2[AnyRef]): Bag2[_])
    (g4(null: Bag2[String]): Bag2[_])
    println("")
  }
}

Regardless, Martin says SIP. Two SIPs: one for general rule change,
and the other for declaration-site marking. I say writing that is the
most profitable use of time to those interested in this issue at this
point.

In case anyone is wondering, I don't agree that the most profitable use of my time is to write something which I fully expect to go nowhere.  If anyone else thinks it's the most profitable use of their time, you have my support.

Daniel Sobral

unread,
May 30, 2012, 12:47:15 PM5/30/12
to scala-l...@googlegroups.com
On Wed, May 30, 2012 at 12:05 PM, Paul Phillips <pa...@improving.org> wrote:
>
> On Wed, May 30, 2012 at 5:42 AM, Daniel Sobral <dcso...@gmail.com> wrote:
>>
>> Overloads are not parameterized, much less variant.
>>
>> With overload, you have B <: A. The problem here is when B <: A but
>> T[A] <: T[B]. And, mind you, there's even the case where T[A] and T[B]
>> are not subtypes of each other.
>
>
> Did you not look at any of the code in this thread? Do you think overloading
> behavior is irrelevant to this question? The following makes it as clear as
> I know how to make it.  Here is the output of this code:

I just confused overload with override.

> In case anyone is wondering, I don't agree that the most profitable use of
> my time is to write something which I fully expect to go nowhere.  If anyone
> else thinks it's the most profitable use of their time, you have my support.

I fail to see how it could be _less_ profitable than writing all that
code in answer to an e-mail that is not even disagreeing with your
proposal, but to each his own.