end the blight of contrarivariance


Paul Phillips

May 27, 2012, 7:13:07 PM
to scala-l...@googlegroups.com, Kris Nuttycombe, Jeff Olson, Miles Sabin, Jason Zaugg
[CC list are probably all on scala-language, but included for interest.]

I don't know how to build a more open-and-shut case than this. (And I won't - whatever remains to be done is unlikely to be done by me.) Given this test case, also attached:


In trunk it prints:

If there are several eligible arguments which match the
implicit parameter’s type, a most specific one will be
chosen using the rules of static overloading resolution.
  -- SLS 7.2, "Implicit Parameters"

Static overloading selection: 1  2  3
    Implicit value selection: 1  1  1

In the branch indicated above (paulp, topic/contrarivariance)  it prints the same quote from the specification, followed by

Static overloading selection: 1  2  3
    Implicit value selection: 1  2  3

So as I see it, scala is not implementing its own specification and we should have fixed this bug years ago.  It goes back to at least October 2009, closed wontfix: https://issues.scala-lang.org/browse/SI-2509

As noted in my commit comment, once selection is done properly it's easy to make Ordering and friends contravariant.  Everything works the way you'd imagine.  It's so nice seeing the Iterable[T] implicit Ordering used for any subclass of Iterable without involving the sketchy hijinx seen in Ordering#ExtraImplicits.  But I didn't bundle any of those changes.
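
[For readers who haven't fought this before: the workaround alluded to is roughly the explicit import below, a sketch assuming the 2.10-era scala.math.Ordering.Implicits, which covers Seq subclasses only.]

import scala.math.Ordering.Implicits._  // object Implicits extends Ordering.ExtraImplicits

object SeqOrderingDemo extends App {
  // seqDerivedOrdering (from ExtraImplicits) supplies an Ordering[List[Int]]
  // for the Seq subclass; without the explicit import this line does not compile.
  println(List(List(2, 1), List(1, 3)).sorted)
}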

NOTE: the particulars of how I modified "isAsSpecific" are not anything I'm advocating for, I'm sure it's all totally wrong and etc.  My offering is:

 a) evidence that the specification already mandates that specificity is based on inheritance, not on the subtype lattice
 b) a complete working implementation, however flawed - all tests pass

contravariant-selection.scala
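
[Since the attachment isn't reproduced in the archive, here is a hypothetical reconstruction of the kind of test being described, not the actual file, which shows the 1 2 3 / 1 1 1 contrast above on a 2.10-era compiler.]

object ContravariantSelection extends App {
  trait Ord[-T]

  implicit val ord1: Ord[Iterable[Int]] = new Ord[Iterable[Int]] { override def toString = "1" }
  implicit val ord2: Ord[Seq[Int]]      = new Ord[Seq[Int]]      { override def toString = "2" }
  implicit val ord3: Ord[List[Int]]     = new Ord[List[Int]]     { override def toString = "3" }

  // The parameter classes erase differently, so plain static overloading is legal here.
  def pick(xs: Iterable[Int]) = "1"
  def pick(xs: Seq[Int])      = "2"
  def pick(xs: List[Int])     = "3"

  println("Static overloading selection: " +
    pick(Iterable(1)) + "  " + pick(Seq(1)) + "  " + pick(List(1)))

  // On trunk the widest value (Ord[Iterable[Int]]) wins all three searches, giving 1 1 1;
  // under the proposed selection each exact match would win, giving 1 2 3.
  println("    Implicit value selection: " +
    implicitly[Ord[Iterable[Int]]] + "  " + implicitly[Ord[Seq[Int]]] + "  " + implicitly[Ord[List[Int]]])
}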

martin odersky

May 28, 2012, 5:33:25 AM
to scala-l...@googlegroups.com
On Mon, May 28, 2012 at 1:13 AM, Paul Phillips <pa...@improving.org> wrote:
[CC list are probably all on scala-language, but included for interest.]

I don't know how to build a more open-and-shut case than this. (And I won't - whatever remains to be done is unlikely to be done by me.) Given this test case, also attached:


In trunk it prints:

If there are several eligible arguments which match the
implicit parameter’s type, a most specific one will be
chosen using the rules of static overloading resolution.
  -- SLS 7.2, "Implicit Parameters"

Static overloading selection: 1  2  3
    Implicit value selection: 1  1  1

In the branch indicated above (paulp, topic/contrarivariance)  it prints the same quote from the specification, followed by

Static overloading selection: 1  2  3
    Implicit value selection: 1  2  3

So as I see it, scala is not implementing its own specification and we should have fixed this bug years ago.  It goes back to at least October 2009, closed wontfix: https://issues.scala-lang.org/browse/SI-2509

No, there is a crucial difference: The f function 
is _applied_ to arguments of the three different types, yet the 
implicit values stand alone. We all know that variance reverses in argument position and that's the effect you are seeing. So your example demonstrates in effect that the spec and compiler are in agreement. I have an example showing the right correspondence with overloading at the end of this mail.

We have talked about this many times before. To make progress here you'd have to invent a completely new notion of specificity alongside the subtyping relation we have. Specificity (i.e. minimize all type parameters regardless of variance in the style of Eiffel) just feels wrong to me from a type-systematic point of view. It's also a big change, and it's a change that would make the language considerably more complicated. And then I am not even sure we won't run into a new set of borderline cases where specificity is not the right criterion and we want subtyping instead.

It would be cleaner to have variance annotations in implicit parameters. E.g. something like:

  def [T] foo(-implicit x: Ord[T])

to indicate that we want to maximize the Ord implicit (because all we do is apply it to an argument later), instead of minimizing it. It would fix the problem in a clean way. Unfortunately it's also more complexity in the user's face. That's why the ticket was a wontfix.

Cheers

 - Martin


Here's an example that shows overloading resolution of the Ord values in the sense of the spec. I had to equip the different Ords with implicits themselves because otherwise we'd have run into a duplicate method error. As expected, it prints 1, i.e. the Ord[Iterable] is chosen over the others. So overloading and implicit resolution are in agreement.


trait A
trait B
trait C
trait Ord[-T]

object Test extends App {

  implicit val a = new A {}
  implicit val b = new B {}
  implicit val c = new C {}

  def Ord(implicit x: A): Ord[Iterable[Int]] = new Ord[Iterable[Int]] { override def toString = "1" }
  def Ord(implicit x: B): Ord[     Seq[Int]] = new Ord[     Seq[Int]] { override def toString = "2" }
  def Ord(implicit x: C): Ord[    List[Int]] = new Ord[    List[Int]] { override def toString = "3" }

  println(Ord)

}


 



--
Martin Odersky
Prof., EPFL and Chairman, Typesafe
PSED, 1015 Lausanne, Switzerland
Tel. EPFL: +41 21 693 6863
Tel. Typesafe: +41 21 691 4967

Paul Phillips

May 28, 2012, 12:59:21 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 2:33 AM, martin odersky <martin....@epfl.ch> wrote:
To make progress here you'd have to invent a completely new notion of specificity alongside the subtyping relation we have.

For posterity (I'm not arguing - I give up) this is the completely new notion of specificity which I have developed.

  Object A: It can sort vectors of acme brand frabazulators which have a disney emblem on the left front quadrant, and where either tom cruise or john travolta is hiding inside one of the frabazulators.  It cannot sort anything else.
  Object B: It can sort anything.

WHICH OBJECT IS MORE SPECIFIC?

  Scala: object B
  New notion of specificity: object A

The reference to Eiffel makes me wonder if you are really taking the trouble to understand what it is people want here. Eiffel is unsound; this isn't.

martin odersky

May 28, 2012, 2:06:18 PM
to scala-l...@googlegroups.com
I never claimed it is unsound. You can use any relation you like for overloading resolution and implicit search without violating soundness. And maybe I am wrong in my assumption what relationship was proposed, because you did not define it. So I can only guess.

 - Martin

Paul Phillips

May 28, 2012, 2:14:49 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 11:06 AM, martin odersky <martin....@epfl.ch> wrote:
I never claimed it is unsound. You can use any relation you like for overloading resolution and implicit search without violating soundness. And maybe I am wrong in my assumption what relationship was proposed, because you did not define it. So I can only guess.

Anyone in the peanut gallery have their own guess as to what relationship is proposed? I am curious whether everything is really so opaque that only guesses can be hazarded.  I understand your response to mean "you did not phrase it in spec-ese" and since I know empirically that I cannot write spec-ese in a way which will satisfy you, it is a simple way for you to shut down the subject by fiat.  That's your prerogative, but the well does eventually run dry.

martin odersky

May 28, 2012, 2:34:02 PM
to scala-l...@googlegroups.com
You could try. I do not demand "spec-ese", but I think it's fair to demand a definition that covers all cases instead of one specific example. 

 - Martin



Daniel Sobral

May 28, 2012, 4:13:44 PM
to scala-l...@googlegroups.com
I find this issue confusing. Let's assume Ord[-T] and Seq[+T], with
A >: B >: C. If I overload f for Ord[A], Ord[B] and Ord[C], as well
as g for Seq[A], Seq[B] and Seq[C], calling f with an Ord[B] and g
with a Seq[B] will both select the B-variant. However, in the
*absence* of the Ord[B] and Seq[B] overloads, they'll select the
Ord[C] and Seq[A] variants, respectively.

That's quite different from implicitly[Ord[B]] and implicitly[Seq[B]]
-- they act in mirrored ways, the first returning Ord[A] and the
second Seq[C], even in the presence of an implicit directly matching
Ord[B].

If implicit resolution went the way of overload resolution, that would
change the current behavior of co-variant implicit resolution. I'm not
sure if that's being proposed or not, and I do fear it might break
code.

I'm not sure how I'd spec implicit resolution to get the effect
demonstrated. First select available implicits respecting variance,
and then... treat contra-variance as if it were co-variance? That's
confusing. I'd rather say that implicit resolution of contra-variant
types follows the rules of overload resolution, but *not* for
co-variant types. That looks simpler to me, but I find the asymmetry
disquieting.

That's the core of my own doubts about this issue: do I want
asymmetric behavior on implicit resolution? Or, perhaps, using
overload resolution behavior all the way is the way to go (and deal
with the incompatibilities)? Or stay this way?

The problem with the latter option is that it introduces asymmetry
elsewhere: co-variant implicit resolution is useful, contra-variant
implicit resolution is not.
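
[To make the implicitly half of that comparison concrete, here is a sketch assuming 2.10-era resolution; Box[+T] stands in for Seq[+T] and all names are made up. In both cases the exact match is bypassed, just in opposite directions.]

object ExactMatchBypassed extends App {
  class A; class B extends A; class C extends B   // A >: B >: C

  trait Ord[-T] { def label: String }
  trait Box[+T] { def label: String }             // Box stands in for Seq

  implicit val ordA: Ord[A] = new Ord[A] { def label = "Ord[A]" }
  implicit val ordB: Ord[B] = new Ord[B] { def label = "Ord[B]" }
  implicit val ordC: Ord[C] = new Ord[C] { def label = "Ord[C]" }

  implicit val boxA: Box[A] = new Box[A] { def label = "Box[A]" }
  implicit val boxB: Box[B] = new Box[B] { def label = "Box[B]" }
  implicit val boxC: Box[C] = new Box[C] { def label = "Box[C]" }

  // Contravariant case: ordA and ordB are eligible, and Ord[A] <: Ord[B],
  // so the widest instance wins, bypassing the exact match.
  println(implicitly[Ord[B]].label)   // Ord[A]

  // Covariant case: boxB and boxC are eligible, and Box[C] <: Box[B],
  // so the narrowest instance wins, again bypassing the exact match.
  println(implicitly[Box[B]].label)   // Box[C]
}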

--
Daniel C. Sobral

I travel to the future all the time.

Paul Phillips

May 28, 2012, 4:31:32 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 1:13 PM, Daniel Sobral <dcso...@gmail.com> wrote:
That's the core of my own doubts about this issue: do I want
asymmetric behavior on implicit resolution?

Type inference already has plenty of asymmetry.  Ask yourself this: why does the invariant case act like the covariant case in the excerpt below? Why should invariance pick a side? Why that side? What is being admitted through the preference?

You can only subclass in one direction.  Perhaps an analogy is travel through the fourth dimension.  On paper you can flip all the arrows and the equations still work.  That doesn't mean we design systems to accommodate "traveling backward in time" and "traveling forward in time" with equal preference.


A maximal type T[i] will be chosen if the type parameter a[i] appears
contravariantly (§4.5) in the type T of the expression. A minimal type
T[i] will be chosen in all other situations, i.e. if the variable appears
covariantly, non-variantly or not at all in the type T. We call such a
substitution an optimal solution of the given constraint system for the
type T.
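
[A tiny sketch of the asymmetry that excerpt describes, assuming 2.10-era local type inference; all names here are made up.]

object InferenceAsymmetry extends App {
  def make[A]: List[A]   = Nil        // A occurs covariantly in the result type
  def sink[A]: A => Unit = _ => ()    // A occurs contravariantly in the result type

  val xs = make   // A is minimized: xs is inferred as List[Nothing]
  val f  = sink   // A is maximized: f is inferred as Any => Unit

  // These ascriptions compile only because inference picked those instantiations.
  val xsChecked: List[Nothing] = xs
  val fChecked:  Any => Unit   = f

  println("covariant occurrence minimized; contravariant occurrence maximized")
}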

Paul Phillips

May 28, 2012, 4:36:49 PM
to scala-l...@googlegroups.com
Also, you're making it more complicated than it is.  Rather than attempt to deconstruct your layers, can you just tell me what the mystery is.

trait A[+T]  // we preference A[String] over A[Any] if both are available
trait A[-T]   // currently: we preference A[Any] over A[String] if both are available
trait A[-T]   // changed: we preference A[String] over A[Any] if both are available

That's it.  We can talk about more complicated cases, but so far it doesn't seem like people understand this one.
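
[The first two of those lines in runnable form, a sketch assuming a 2.10-era compiler with made-up names; the third line is the behavior the change would produce.]

object PreferenceDemo extends App {
  trait Plus[+T]; trait Minus[-T]

  implicit val plusAny:  Plus[Any]     = new Plus[Any]     { override def toString = "Plus[Any]" }
  implicit val plusStr:  Plus[String]  = new Plus[String]  { override def toString = "Plus[String]" }
  implicit val minusAny: Minus[Any]    = new Minus[Any]    { override def toString = "Minus[Any]" }
  implicit val minusStr: Minus[String] = new Minus[String] { override def toString = "Minus[String]" }

  println(implicitly[Plus[Any]])      // prints Plus[String]: under +T the String value is preferred
  println(implicitly[Minus[String]])  // prints Minus[Any]: under -T the Any value is preferred today;
                                      // the proposed change would make this print Minus[String]
}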

Erik Osheim

May 28, 2012, 4:46:03 PM
to scala-l...@googlegroups.com
Right.

I'd expect the type you ask for (e.g. Ord[Tiger]) to always be more
specific, rather than immediately generalizing to Ord[Any],
Ord[Animal], Ord[Cat], etc.

Is there an example where using "reversed specificity" for
contravariant implicit resolution (i.e. preferring Ord[Tiger] over
Ord[Cat]) would cause problems? Bonus points if the example works
currently and does something useful! :)

-- Erik

martin odersky

May 28, 2012, 4:56:35 PM
to scala-l...@googlegroups.com
I don't know. But for the moment I would be interested to just see what we are defining here. Can someone give a complete definition of "reverse specificity"? The one definition I can think of is not pretty: repeat all the rules that we have given for subtyping, but change the rule for type arguments:

T[X] more-specific-than T[Y]  

if X more-specific-than Y, irrespective of variance annotations. 

It works for the example but it repeats a lot of rules. And I do not see a semantic justification (in the sense that types are sets of values), so it does feel wrong to me.

Cheers

 - Martin

Paul Phillips

May 28, 2012, 4:58:53 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 1:46 PM, Erik Osheim <er...@plastic-idolatry.com> wrote:
Is there an example where using "reversed specificity" for
contravariant implicit resolution (i.e. preferring Ord[Tiger] over
Ord[Cat]) would cause problems? Bonus points if the example works
currently and does something useful! :)

An example of where you would want or depend upon the current behavior would certainly be interesting.  I have never seen one.  (One reason I've never seen one is that contravariance is used so rarely - of course, the subject under discussion is a large chunk of the reason contravariance is used so rarely.)

Paul Phillips

May 28, 2012, 5:01:05 PM
to scala-l...@googlegroups.com


On Mon, May 28, 2012 at 1:56 PM, martin odersky <martin....@epfl.ch> wrote:
It works for the example but it repeats a lot of rules.

Don't cut and paste, refactor.

Tony Morris

May 28, 2012, 5:53:25 PM
to scala-l...@googlegroups.com

FYI contravariance is not so rare. We see this issue all the time.

Paul Phillips

May 28, 2012, 5:58:57 PM
to scala-l...@googlegroups.com
On Mon, May 28, 2012 at 2:53 PM, Tony Morris <tonym...@gmail.com> wrote:
>
> FYI contravariance is not so rare. We see this issue all the time.

I know, that's what my last sentence meant.  "The subject under
discussion is a large chunk of the reason contravariance is used so
rarely." Meaning, you use contravariance less than you would like
because of this.

Tony Morris

May 28, 2012, 6:42:13 PM
to scala-l...@googlegroups.com

Ah right. Yes, I can attest to this, in that I prefer to define a contramap method rather than declare contravariance with a - symbol. Too much nasty lurks there.

Jesper Nordenberg

May 29, 2012, 2:40:34 AM
to scala-l...@googlegroups.com, Paul Phillips
Paul Phillips skrev 2012-05-28 22:36:
> Also, you're making it more complicated than it is. Rather than attempt
> to deconstruct your layers, can you just tell me what the mystery is.
>
> trait A[+T] // we preference A[String] over A[Any] if both are available
> trait A[-T] // currently: we preference A[Any] over A[String] if both
> are available
> trait A[-T] // changed: we preference A[String] over A[Any] if both
> are available

Given:

implicit val aa = new A[Any]
implicit val as = new A[String]
def foo[T](implicit a : A[T]) = a
foo[String] // == aa !!!

It just feels wrong that scalac chooses A[Any] in this case, regardless
of variance annotation on A's type parameter. So +10 for this change.

/Jesper Nordenberg

Paul Phillips

May 29, 2012, 3:53:40 AM
to scala-l...@googlegroups.com
In case working code has any bearing, I pushed contravariant Ordering, PartialOrdering, and Equiv.


// Hey, it works right out of the box.
scala> List(List("abc"), Seq("def"), Set("aaa")).sorted
res0: List[Iterable[String] with String with Int => Any] = List(Set(aaa), List(abc), List(def))

// What trunk does.  Shoot, where did I leave my
//   Ordering[Iterable[String] with String with Int => Any]
// it was just here a second ago...
scala> List(List("abc"), Seq("def"), Set("aaa")).sorted
<console>:8: error: No implicit Ordering defined for Iterable[String] with String with Int => Any.
              List(List("abc"), Seq("def"), Set("aaa")).sorted
                                                        ^

martin odersky

May 29, 2012, 6:48:23 AM
to scala-l...@googlegroups.com
But what about this situation

implicit val aa: String => Object
implicit val bb: Object => String

Which one is more specific? In Scala, it's bb. It provides a better type _and_ works for more arguments. In types-as-sets-of-values terms, there are way fewer functions of type Object => String than there are of type String => Object. So, clearly bb's type is more specific.

However, in the proposed new scheme, you'd get an ambiguity. Neither aa nor bb is more specific than the other.

Not only does this break code, it is also very unintuitive. 

 - Martin


Paul Phillips

May 29, 2012, 8:51:14 AM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 3:48 AM, martin odersky <martin....@epfl.ch> wrote:
implicit val aa: String => Object
implicit val bb: Object => String

Do you have any non-contrived examples? Over here we're looking at unarguably real situations like Ordering and Foldable being crippled to the point of uselessness.  When is it exactly that people count on implicit resolution choosing Object => String over String => Object, and why aren't the people counting on that strangled in their sleep by others on their projects?

Which one is more specific? In Scala, it's bb. It provides a better type _and_ works for more arguments. In types-as-sets-of-values terms, there are way fewer functions of type Object => String than there are of type String => Object. So, clearly bb's type is more specific.

Clearly, as long as one is sure to think only in "types as sets of values".  If everyone did that all the time we'd all be glad Ordering[Any] was chosen over Ordering[MySpecificType] and we wouldn't still be talking about this years later.  I expect that most people understand specificity to correlate with *programmer-provided* specificity.  And you can still only subclass in one direction.

However, in the proposed new scheme, you'd get an ambiguity. Neither aa nor bb is more specific than the other.

I consider this a feature.  For a guy who put a bunch of features behind warning flags for being too dangerous, you're pretty cavalier about settling "String => Object" vs. "Object => String" via implicit resolution.
 
Not only does this break code, it is also very unintuitive. 

It's intuitive that in Jesper's example, implicitly[A[String]] resolves to A[Any] rather than the implicit A[String] that was just defined?

I'd like to see the code it breaks.  Maybe it does somewhere - it wouldn't change whether it should be done - but I'd like to see who it is who is relying on "specificity" of this kind and how it is they are doing so.  A real-life usage doesn't seem like much to ask in light of the fairly ridiculous cumulative amount of effort I've now put into this, not to mention the efforts of numerous others.

Chris Marshall

May 29, 2012, 11:38:58 AM
to scala-l...@googlegroups.com
For what it's worth, I agree with Paul; it's frustrating that scalaz has made Equal invariant in v7 to get around these issues.

Chris

Jesper Nordenberg

May 29, 2012, 12:30:40 PM
to scala-l...@googlegroups.com, martin odersky
martin odersky skrev 2012-05-29 12:48:
> But what about this situation
>
> implicit val aa: String => Object
> implicit val bb: Object => String
>
> Which one is more specific? In Scala, it's bb. It provides a better
> type _and_ works for more arguments. In types-as-sets-of-values terms,
> there are way fewer functions of type Object => String than there are of
> type String => Object. So, clearly bb's type is more specific.

What's the context for the implicit search? Or are you talking about
some absolute ordering of specificity (because I don't think there is one)?

/Jesper Nordenberg

Jason Zaugg

May 29, 2012, 12:33:33 PM
to scala-l...@googlegroups.com
This search would consider both candidates:

implicitly[Object => Object]

-jason

Ryan Hendrickson

May 29, 2012, 12:41:21 PM
to scala-l...@googlegroups.com
> >> But what about this situation
> >>
> >> implicit val aa: String => Object
> >> implicit val bb: Object => String
> >>
> >> Which one is more specific? In Scala, it's bb. It provides a better
> >> type _and_ works for more arguments. In types-as-sets-of-values terms,
> >> there are way fewer functions of type Object => String than there are of
> >> type String => Object. So, clearly bb's type is more specific.
> >
> >
> > What's the context for the implicit search? Or are you talking about some
> > absolute ordering of specificity (because I don't think there is one)?
>
> This search would consider both candidates:
>
> implicitly[Object => Object]

Surely not? A (String => Object) is not an (Object => Object).





(please forgive me my corporate legal disclaimer)

----------------------------------------

This message is intended exclusively for the individual(s) or entity to
which it is addressed. It may contain information that is proprietary,
privileged or confidential or otherwise legally exempt from disclosure.
If you are not the named addressee, you are not authorized to read,
print, retain, copy or disseminate this message or any part of it.
If you have received this message in error, please notify the sender
immediately by e-mail and delete all copies of the message.

Jason Zaugg

May 29, 2012, 12:54:52 PM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 6:41 PM, Ryan Hendrickson
<Ryan.Hen...@bwater.com> wrote:
>> This search would consider both candidates:
>>
>>   implicitly[Object => Object]
>
> Surely not? A (String => Object) is not an (Object => Object).

Um, right. My mental contravariance switches appear to be faulty.

-jason

martin odersky

May 29, 2012, 1:01:52 PM
to scala-l...@googlegroups.com
It could be a polymorphic context such as S => T, for type variables S and T.

Cheers

 - Martin
  

Jeff Olson

May 29, 2012, 1:27:28 PM
to scala-l...@googlegroups.com
+10 for making this change (including make Ordering and friends contravariant). I've long wanted to see this fixed.

As to Martin's claim that this change would make the language more complex: I disagree, at least partially. A language with intuitive semantics is perceived as simpler regardless of the underlying implementation or specification complexity. The converse is also true: a language with unintuitive semantics appears complex and overly complicated regardless of implementation.

Assume for the moment that Ordering were contravariant (as God intended) and go ask 10 Scala developers which is more specific:

1. Ordering[Any]
2. Ordering[String]

Then ask them the same for

1. String => Any
2. Any => String

I haven't actually tried this, but I'm willing to bet everyone with a functioning brain will tell you that an Ordering[String] is more specific, and that you'll get very mixed results for the second. Most people will scratch their heads and admit they don't know.

So is it okay that the compiler issues an ambiguity error when forced to choose between String => Any and Any => String? Absolutely; after all, it's ambiguous to the ordinary developer. Is it okay that the compiler chooses Ordering[Any] over Ordering[String]? Absolutely not. Not only is it unintuitive, it's almost certainly not the desired behavior.

I have yet to see a compelling real world example of why the current implicit search behavior with regards to contravariance is desirable. On the other hand, as Paul rightly points out, we have dozens of real world examples where new behavior is not only desirable but necessary.

So please, rather than appealing to some fuzzy notion of "feeling wrong" let's come up with some real examples, or get to work figuring out how to fix the spec.

-Jeff

Jeff Olson

May 29, 2012, 1:31:18 PM
to scala-l...@googlegroups.com

Erik Osheim

May 29, 2012, 1:43:36 PM
to scala-l...@googlegroups.com
On Mon, May 28, 2012 at 10:56:35PM +0200, martin odersky wrote:
> I don't know. But for the moment I would be interested to just see what we
> are defining here. Can someone give a complete definition of "reverse
> specificity"? The one definition I can think of is not pretty: repeat all
> rules that we have given for subtyping, but change the rule for type
> arguments:
>
> T[X] more-specific-than T[Y]
>
> if X more-specific-than Y, irrespective of variance annotations.

Unfortunately I don't think I can give a better definition. Maybe
Adriaan could chime in [1]?

Basing specificity(T[X], T[Y]) on specificity(X, Y) conforms to how
developers are thinking about (and trying to use) these sorts of
implicits right now. I think that this change would even make
covariance a bit nicer in some cases.

It seems like from the developers' point of view this change would be
all up side.

-- Erik

[1] https://github.com/adriaanm/scala-dev/wiki/Contravariance-and-Specificity

Jesper Nordenberg

May 29, 2012, 3:52:30 PM
to scala-l...@googlegroups.com, martin odersky
martin odersky skrev 2012-05-29 19:01:
> It could be a polymorphic context such as S => T, for type variables S
> and T.

Please give a complete example because I can't make this implicit search
work unambiguously.

/Jesper Nordenberg

Paul Phillips

May 29, 2012, 3:55:00 PM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 10:27 AM, Jeff Olson <jeff.d...@gmail.com> wrote:
Then ask them the same for

1. String => Any
2. Any => String

And finally, ask them this one:

1. Any => Any
2. String => String

Presently ambiguous, becomes unambiguous.  If "unintuitive" is indeed a potential knock on the change, that's good news, because on that criterion we can't lose.  In any objective assessment of which is more intuitive, status quo would be crushed.

martin odersky

May 29, 2012, 4:45:26 PM
to scala-l...@googlegroups.com
Here's a self-contained example. You have to exclude some predefined function values in Predef, which is achieved by the import.
 
import Predef.println

object Test extends App {

  def foo[S >: Null, T](implicit x: S => T) = x(null)

  implicit val f: String => Object = x => { println("String => Object"); x }
  implicit val g: Object => String = x => { println("Object => String"); ""+x }

  println(foo)

}

This will print: Object => String, so g is selected over f.

Cheers

 - Martin


John Nilsson

May 29, 2012, 4:54:57 PM
to scala-l...@googlegroups.com
Isn't the behavior sought here exactly the same as for static overloading?

When resolving overloading, a set of operations is compared wrt their
input types, and the one with the most "specific" argument type is
selected. Being operations, the type in question is in a contravariant
position, no?

So this program should print two identical lines given that
overloading and implicit search should be the same:

def m(o:Object):String = "os"
def m(s:String):Object = "so"

implicit val os: Object => String = m
implicit val so: String => Object = m

def t1 = m(null:String)
def t2(implicit m: String => Object) = m(null:String)

println(t1)
println(t2)

BR,
John

Daniel Sobral

May 29, 2012, 5:00:02 PM
to scala-l...@googlegroups.com
Well, yes, and you specified Null as the lower bound, so, presumably,
you could have worked with String => Object as well. This is similar
to the Nothing-inference problem, and would show up when you have
overly generic types _and_ more than one competing implicit.

Personally, I'd live happily with it.

--
Daniel C. Sobral

I travel to the future all the time.

√iktor Ҡlang

May 29, 2012, 5:17:30 PM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 11:00 PM, Daniel Sobral <dcso...@gmail.com> wrote:
On Tue, May 29, 2012 at 5:45 PM, martin odersky <martin....@epfl.ch> wrote:
>
>
> On Tue, May 29, 2012 at 9:52 PM, Jesper Nordenberg <mega...@yahoo.com>
> wrote:
>>
>> martin odersky skrev 2012-05-29 19:01:
>>
>>> It could be a polymorphic context such as S => T, for type variables S
>>> and T.
>>
>>
>> Please give a complete example because I can't make this implicit search
>> work unambiguously.
>>
> Here's a self-contained example. You have to exclude some predefined
> function values in Predef, which is achieved by the import.
>
> import Predef.println
>
> object Test extends App {
>
>   def foo[S >: Null, T](implicit x: S => T) = x(null)
>
>   implicit val f: String => Object = x => { println("String => Object"); x }
>   implicit val g: Object => String = x => { println("Object => String");
> ""+x }
>
>   println(foo)
>
> }
>
> This will print: Object => String, so g is selected over f.

Well, yes, and you specified Null as the lower bound, so, presumably,
you could have worked with String => Object as well. This is similar
to the Nothing-inference problem,

Ah, such fond memories are associated with this one. * mind wanders off *
 
and would show up when you have
overly generic types _and_ more than one competing implicit.

Personally, I'd live happily with it.

--
Daniel C. Sobral

I travel to the future all the time.



--
Viktor Klang

Akka Tech Lead
Typesafe - The software stack for applications that scale

Twitter: @viktorklang

Jesper Nordenberg

May 29, 2012, 5:32:02 PM
to scala-l...@googlegroups.com, martin odersky
martin odersky skrev 2012-05-29 22:45:
> Here's a self-contained example. You have to exclude some predefined
> function values in Predef, which is achieved by the import.
> import Predef.println
>
> object Test extends App {
>
> def foo[S >: Null, T](implicit x: S => T) = x(null)
>
> implicit val f: String => Object = x => { println("String =>
> Object"); x }
> implicit val g: Object => String = x => { println("Object =>
> String"); ""+x }
>
> println(foo)
>
> }
>
> This will print: Object => String, so g is selected over f.

I fail to see why this behavior is desirable. S and T are basically
unbounded so why should either implicit be more specific than the other?
An ambiguity error seems natural here.

I think of specificity as inverse "type distance", i.e. given A <: B <:
C the distance between F[A] and F[C] is greater than the distance
between F[A] and F[B] regardless of the variance annotation on F's type
parameter. The variance annotation only specifies the subtype relation
between, for example, F[A] and F[B].

/Jesper Nordenberg

martin odersky

May 29, 2012, 5:47:47 PM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 11:32 PM, Jesper Nordenberg <mega...@yahoo.com> wrote:
martin odersky skrev 2012-05-29 22:45:

Here's a self-contained example. You have to exclude some predefined
function values in Predef, which is achieved by the import.
import Predef.println

object Test extends App {

  def foo[S >: Null, T](implicit x: S => T) = x(null)

  implicit val f: String => Object = x => { println("String =>
Object"); x }
  implicit val g: Object => String = x => { println("Object =>
String"); ""+x }

  println(foo)

}

This will print: Object => String, so g is selected over f.

I fail to see why this behavior is desirable. S and T are basically unbounded so why should either implicit be more specific than the other? 

Because there are fewer functions like f than functions like g.


I think of specificity as inverse "type distance", i.e. given A <: B <: C the distance between F[A] and F[C] is greater than the distance between F[A] and F[B] regardless of the variance annotation on F's type parameter. The variance annotation only specifies the subtype relation between for example F[A] and F[B].


I'd like to challenge you to come up with a semantic characterization of what you intend to say.

I see this question causes a lot of heat and strong feelings.
And I agree that better support for Ordered and friends is very desirable. But at the same time I am not willing to make what I still think are ad-hoc changes to the Scala type system to accommodate that.

I think the whole thing calls for a SIP. No big need to motivate the change, but let's provide the precise rules and discuss how it fits into Scala. Ideally, I'd like to see another SIP that pursues the implicit variance idea that I had outlined in the first mail to this thread. Then we can discuss
which of the two (if any) should go in. 

One word regarding timing. I'm currently totally swamped with getting the 2.10 release out of the door and improving compiler speed by making it more incremental. These two issues have to take precedence. Once I have a little bit of air I am happy to jump back into the discussions.

Cheers

 - Martin


martin odersky

May 29, 2012, 5:49:18 PM
to scala-l...@googlegroups.com
On Tue, May 29, 2012 at 11:47 PM, martin odersky <martin....@epfl.ch> wrote:


On Tue, May 29, 2012 at 11:32 PM, Jesper Nordenberg <mega...@yahoo.com> wrote:
martin odersky skrev 2012-05-29 22:45:

Here's a self-contained example. You have to exclude some predefined
function values in Predef, which is achieved by the import.
import Predef.println

object Test extends App {

  def foo[S >: Null, T](implicit x: S => T) = x(null)

  implicit val f: String => Object = x => { println("String =>
Object"); x }
  implicit val g: Object => String = x => { println("Object =>
String"); ""+x }

  println(foo)

}

This will print: Object => String, so g is selected over f.

I fail to see why this behavior is desirable. S and T are basically unbounded so why should either implicit be more specific than the other? 

Because there are fewer functions like f than functions like g.

Sorry, I meant to say: "fewer functions like g than functions like f".

 -- Martin 

MP

May 29, 2012, 6:24:23 PM5/29/12