[CC list are probably all on scala-language, but included for interest.]

I don't know how to build a more open-and-shut case than this. (And I won't - whatever remains to be done is unlikely to be done by me.) Given this test case, also attached:

In trunk it prints:

  If there are several eligible arguments which match the
  implicit parameter's type, a most specific one will be
  chosen using the rules of static overloading resolution.
  -- SLS 7.2, "Implicit Parameters"

  Static overloading selection: 1 2 3
  Implicit value selection: 1 1 1

In the branch indicated above (paulp, topic/contrarivariance) it prints the same quote from the specification, followed by:

  Static overloading selection: 1 2 3
  Implicit value selection: 1 2 3

So as I see it, scala is not implementing its own specification and we should have fixed this bug years ago. It goes back to at least October 2009, closed wontfix: https://issues.scala-lang.org/browse/SI-2509
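[The attachment is not reproduced in the archive. The following is an editor's hypothetical reconstruction of the *shape* of such a test case - all names are made up - showing how one can compare overloading resolution with implicit selection for a contravariant type.]

```scala
// Hypothetical reconstruction (names invented; not the actual attachment).
class Animal; class Cat extends Animal; class Tiger extends Cat

// A contravariant "type class" carrying an id so we can see which instance wins.
class Printer[-A](val id: Int)
implicit val animalPrinter: Printer[Animal] = new Printer(1)
implicit val catPrinter: Printer[Cat]       = new Printer(2)
implicit val tigerPrinter: Printer[Tiger]   = new Printer(3)

// Static overloading: the most specific parameter type wins.
def overload(a: Animal) = 1
def overload(c: Cat)    = 2
def overload(t: Tiger)  = 3

def printerFor[A](implicit p: Printer[A]): Int = p.id

println("Static overloading selection: " +
  Seq(overload(new Animal), overload(new Cat), overload(new Tiger)).mkString(" "))
// Whether the next line prints "1 1 1" or "1 2 3" depends on the compiler's
// specificity rule - which is exactly what this thread is arguing about.
println("Implicit value selection: " +
  Seq(printerFor[Animal], printerFor[Cat], printerFor[Tiger]).mkString(" "))
```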
As noted in my commit comment, once selection is done properly it's easy to make Ordering and friends contravariant. Everything works the way you'd imagine. It's so nice seeing the Iterable[T] implicit Ordering used for any subclass of Iterable without involving the sketchy hijinks seen in Ordering#ExtraImplicits. But I didn't bundle any of those changes.
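[Editor's sketch of the payoff being described, using a hypothetical contravariant Ord trait rather than the actual library patch:]

```scala
// Hypothetical contravariant type class standing in for Ordering
// (the real patch is not reproduced here).
trait Ord[-A] { def lt(x: A, y: A): Boolean }

// One instance written against Iterable[Int]...
implicit val iterableOrd: Ord[Iterable[Int]] = new Ord[Iterable[Int]] {
  def lt(x: Iterable[Int], y: Iterable[Int]) = x.sum < y.sum
}

def min[A](x: A, y: A)(implicit o: Ord[A]): A = if (o.lt(x, y)) x else y

// ...serves any subclass of Iterable, with no ExtraImplicits-style adapters:
// Ord[Iterable[Int]] <: Ord[List[Int]] by contravariance.
println(min(List(1, 2), List(4)))
```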
NOTE: the particulars of how I modified "isAsSpecific" are not anything I'm advocating for; I'm sure it's all totally wrong, etc. My offering is:

a) evidence that the specification already mandates that specificity is based on inheritance, not on the subtype lattice
b) a complete working implementation, however flawed - all tests pass
To make progress here you'd have to invent a completely new notion of specificity alongside the subtyping relation we have.
I never claimed it is unsound. You can use any relation you like for overloading resolution and implicit search without violating soundness. And maybe I am wrong in my assumption about what relation was proposed, because you did not define it, so I can only guess.
That's the core of my own doubts about this issue: do I want
asymmetric behavior on implicit resolution?
Is there an example where using "reversed specificity" for
contravariant implicit resolution (i.e. preferring Ord[Tiger] over
Ord[Cat]) would cause problems? Bonus points if the example works
currently and does something useful! :)
It works for the example but it repeats a lot of rules.
FYI contravariance is not so rare. We see this issue all the time.
Ah right. Yes, I can attest to this, in that I prefer to define a contramap method rather than declare contravariance with a - symbol. Too much nastiness lurks there.
implicit val aa: String => Object
implicit val bb: Object => String
Which one is more specific? In Scala, it's bb. It provides a better type _and_ works for more arguments. In types-as-sets-of-values terms, there are way fewer functions of type Object => String than there are of type String => Object. So, clearly bb's type is more specific.
However, in the proposed new scheme, you'd get an ambiguity. Neither aa nor bb is more specific than the other.
Not only does this break code, it is also very unintuitive.
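[The types-as-sets counting claim above can be sanity-checked with small finite sets standing in for the types - an editor's toy illustration, where a 3-element set plays the "big" type (Object) and a 2-element subset plays the "small" one (String). There are |Y|^|X| functions from X to Y:]

```scala
// Toy check of the counting argument. The sets are stand-ins:
// objectLike ~ Object (more values), stringLike ~ String (a subset).
val objectLike = Set(1, 2, 3)
val stringLike = Set(1, 2)

def countFns(dom: Set[Int], cod: Set[Int]): BigInt =
  BigInt(cod.size).pow(dom.size)

println(countFns(objectLike, stringLike)) // "Object => String": 2^3 = 8 functions
println(countFns(stringLike, objectLike)) // "String => Object": 3^2 = 9 functions
```

So the type with the bigger domain and smaller codomain really does contain fewer functions.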
Then ask them the same for

1. String => Any
2. Any => String
Well, yes, and you specified Null as the lower bound, so, presumably,

On Tue, May 29, 2012 at 5:45 PM, martin odersky <martin....@epfl.ch> wrote:
>
>
> On Tue, May 29, 2012 at 9:52 PM, Jesper Nordenberg <mega...@yahoo.com>
> wrote:
>>
>> martin odersky skrev 2012-05-29 19:01:
>>
>>> It could be a polymorphic context such as S => T, for type variables S
>>> and T.
>>
>>
>> Please give a complete example because I can't make this implicit search
>> work unambiguously.
>>
> Here's a self-contained example. You have to exclude some predefined
> function values in Predef, which is achieved by the import.
>
> import Predef.println
>
> object Test extends App {
>
> def foo[S >: Null, T](implicit x: S => T) = x(null)
>
> implicit val f: String => Object = x => { println("String => Object"); x }
> implicit val g: Object => String = x => { println("Object => String");
> ""+x }
>
> println(foo)
>
> }
>
> This will print: Object => String, so g is selected over f.
you could have worked with String => Object as well. This is similar
to the Nothing-inference problem,
and would show up when you have
overly generic types _and_ more than one competing implicit.
Personally, I'd live happily with it.
--
Daniel C. Sobral
I travel to the future all the time.
martin odersky skrev 2012-05-29 22:45:

I fail to see why this behavior is desirable. S and T are basically unbounded so why should either implicit be more specific than the other?
Here's a self-contained example. You have to exclude some predefined
function values in Predef, which is achieved by the import.
import Predef.println
object Test extends App {
def foo[S >: Null, T](implicit x: S => T) = x(null)
implicit val f: String => Object = x => { println("String => Object"); x }
implicit val g: Object => String = x => { println("Object => String"); ""+x }
println(foo)
}
This will print: Object => String, so g is selected over f.
I think of specificity as inverse "type distance", i.e. given A <: B <: C the distance between F[A] and F[C] is greater than the distance between F[A] and F[B] regardless of the variance annotation on F's type parameter. The variance annotation only specifies the subtype relation between, for example, F[A] and F[B].
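[The "type distance" idea can be sketched as counting inheritance steps - an editor's toy reflection-based illustration, not a proposal for how the compiler would implement it (it ignores traits and interfaces entirely):]

```scala
// Toy sketch: "type distance" as the number of superclass steps on the JVM.
@annotation.tailrec
def distance(c: Class[_], ancestor: Class[_], acc: Int = 0): Int =
  if (c == ancestor) acc
  else if (c.getSuperclass == null)
    sys.error(s"${ancestor.getName} is not a superclass")
  else distance(c.getSuperclass, ancestor, acc + 1)

// String extends Object directly: distance 1.
println(distance(classOf[String], classOf[Object]))
// Integer extends Number extends Object: distance 2.
println(distance(classOf[java.lang.Integer], classOf[Object]))
```

On this view, an implicit F[B] beats an implicit F[C] for a requested F[A] simply because B is fewer steps from A, whatever F's variance.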
On Tue, May 29, 2012 at 11:32 PM, Jesper Nordenberg <mega...@yahoo.com> wrote:

martin odersky skrev 2012-05-29 22:45:

I fail to see why this behavior is desirable. S and T are basically unbounded so why should either implicit be more specific than the other?
Here's a self-contained example. You have to exclude some predefined
function values in Predef, which is achieved by the import.
import Predef.println
object Test extends App {
def foo[S >: Null, T](implicit x: S => T) = x(null)
implicit val f: String => Object = x => { println("String => Object"); x }
implicit val g: Object => String = x => { println("Object => String"); ""+x }
println(foo)
}
This will print: Object => String, so g is selected over f.
Because there are fewer functions like f than functions like g.
Without trying it this way, it is less clear whether the other case--where one really does want subtyping relationships to govern selection--would be sorely missed if the behavior was changed.
Hi Rex,

thanks for the clarification. My take on this is the following (please take me seriously at your own risk): myself, I'm a mathematician and I'm dabbling with representing concepts and algorithms of representation theory and category theory in Scala. For me, it is really great to see that nowadays well-typed (functional) programming languages such as Scala (or Haskell, ...) are available and actually usable for such purposes from a very practical perspective (in particular wrt. performance and availability of "practical" libraries, in contrast to actually using a proof assistant system or such). I'm pretty sure that I am not the only person having this view on Scala and that Scala will be a very interesting option in this sector. This is certainly a far cry from the billion dollar generating query-some-database business which most people probably have in mind when they think about programming on the JVM.

However, from where I'm standing, there seems to exist a trend and a good motivation for programming languages having a solid foundation in typed logic. From that perspective, I completely understand Martin's refusal to accept a modification without communicating how it behaves wrt. the theoretical underpinnings. You might end up with putting lipstick on some libraries, but in the long run it will bite us.

Just my 2c,
Markus
Markus (and anyone else wondering what this thread is about):
The core of the argument is about Scala's type inference / implicit selection mechanism.
In general, when it has multiple options it picks the most selective. This is usually what you want.
The argument here is whether "most selective" and "deepest subtype" are the same thing with functions (and other contravariant entities). That is, do you say
String => String
is more or less selective than
Object => String
?
With subtyping, Object => String is more selective since anywhere you can pass in String => String, an Object => String can do the job. Hence, Object => String <: String => String.
However, one can also argue that String => String is more precisely defined, and so perhaps if you had to choose between the two you ought to pick String => String (i.e. the supertype).
That's what this discussion is about--do you or do you not follow the typing relationships?
Right now, Scala just looks at the typing relationship.
In practice, it seems that many people have identified cases where subtyping is _not_ what they want; instead, even though a type parameter is marked as contravariant, when considering which item to select that is compatible, one should proceed as if it were covariant (i.e. select the deepest subtype of the argument, not of the type-parameterized class).
Without trying it this way, it is less clear whether the other case--where one really does want subtyping relationships to govern selection--would be sorely missed if the behavior was changed.
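[The subtyping direction described above compiles today - an editor's minimal illustration:]

```scala
// Function1 is contravariant in its argument, so Object => String is a
// subtype of String => String: it accepts everything the latter must accept.
val objToStr: Object => String = _.toString
val strToStr: String => String = objToStr  // compiles; the reverse would not

// Compile-time witness of the relation:
implicitly[(Object => String) <:< (String => String)]

println(strToStr("hello"))  // the Object-accepting function handles a String
```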
--Rex
FWIW, maybe implicit resolution should be more like virtual method
lookup.
Which again establishes that implicit lookup and overload resolution can, and should, be the same.
It seems to me that Martin also agrees with this.
So instead of focusing on motivating the implicit case maybe an easier question is why overloading works the way it does?
Btw. From a mathematical point of view it seems that overload resolution is just an application of implicit search with syntactic sugar on top :)
BR
John
Overloads are not parameterized, much less variant.
With overloads, you have B <: A. The problem here is when B <: A but T[A] <: T[B]. And, mind you, there's even the case where T[A] and T[B] are not subtypes of each other.
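[The three cases can be made concrete with illustrative classes - an editor's sketch:]

```scala
// Plain subtyping (the overloading situation), then the three variances.
class A; class B extends A          // B <: A

class Cov[+T]; class Con[-T]; class Inv[T]

implicitly[Cov[B] <:< Cov[A]]       // covariant: follows B <: A
implicitly[Con[A] <:< Con[B]]       // contravariant: T[A] <: T[B], inverted
// implicitly[Inv[A] <:< Inv[B]]    // invariant: would not compile either way
println("variance relations check out")
```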
Regardless, Martin says SIP. Two SIPs: one for the general rule change, and the other for declaration-site marking. I say writing those is the most profitable use of time for those interested in this issue at this point.