Hi all,
I'm reposting this from scala-internals, where it did not get much traction.
It seems that, when inferring a type T that requires some implicit evidence R[T],
scalac starts by inferring the most specific T possible, and upon failing to find evidence for that type
it gives up, even if evidence is present for a supertype.
trait R[T]

case class A(i: Int)
object A {
  implicit object RA extends R[A]
}
class B(i: Int) extends A(i)

def foo[T](x: T)(implicit ev: R[T]) = 0

println(foo(new B(1)))     // infers T as B and fails to find implicit R[B]
println(foo(new B(1) : A)) // works, but undesirable
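For what it's worth, when the shape of the typeclass allows it, declaring R contravariant makes today's search accept the supertype instance. A minimal sketch of that workaround (it is not a general fix, since it cannot apply to typeclasses like AdditiveSemigroup that use T in both positions):

```scala
// Sketch: with R contravariant, R[A] <: R[B] whenever B <: A,
// so the existing implicit search already accepts RA for R[B].
trait R[-T]

case class A(i: Int)
object A {
  implicit object RA extends R[A]
}
class B(i: Int) extends A(i)

def foo[T](x: T)(implicit ev: R[T]) = 0

println(foo(new B(1))) // now compiles: RA conforms to R[B] by contravariance
```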
Another example (using spire):
import spire.algebra.AdditiveSemigroup
import spire.implicits._

sealed abstract class A(val j: Int)
object A {
  implicit val ev: AdditiveSemigroup[A] = new AdditiveSemigroup[A] {
    override def plus(x: A, y: A): A = (x, y) match {
      case (B(l), B(r)) => B(l + r)
      case _            => C(x.j + y.j)
    }
  }
}
case class B(i: Int) extends A(i)
case class C(i: Int) extends A(i)

B(1) + B(2)           // does not compile: no evidence of AdditiveSemigroup[B]
(B(1) : A) + (B(2) : A) // works: each B is also an A
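For anyone without spire on the classpath, the same failure can be reproduced dependency-free. In the sketch below, Semigroup and |+| are hypothetical stand-ins for spire's AdditiveSemigroup and its + syntax:

```scala
// Hypothetical stand-in for spire's AdditiveSemigroup.
trait Semigroup[T] { def plus(x: T, y: T): T }

sealed abstract class A(val j: Int)
object A {
  implicit val ev: Semigroup[A] = new Semigroup[A] {
    def plus(x: A, y: A): A = (x, y) match {
      case (B(l), B(r)) => B(l + r)
      case _            => C(x.j + y.j)
    }
  }
}
case class B(i: Int) extends A(i)
case class C(i: Int) extends A(i)

// Hypothetical operator syntax, standing in for spire.implicits._.
implicit class SemigroupOps[T](x: T)(implicit ev: Semigroup[T]) {
  def |+|(y: T): T = ev.plus(x, y)
}

// B(1) |+| B(2)          // does not compile: no Semigroup[B] in scope
(B(1): A) |+| (B(2): A)   // works after manual widening; yields B(3)
```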
IMO, this seriously undermines the use of subtyping to encode coproducts, which is very idiomatic in Scala.
I would like to propose the following change to type inference/implicit resolution:
When trying to infer a type T that appears in some implicit parameter, the compiler should successively try supertypes of the type it initially infers, failing only if it reaches Any without finding suitable implicits.
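As a data point, the proposed supertype-walking can already be approximated in user code with an extra lower-bounded type parameter, much like TraversableOnce.sum is declared as sum[B >: A](implicit num: Numeric[B]). A sketch of that encoding (the implicit is placed in lexical scope here for simplicity; this pushes the burden onto every method author, which is exactly what the proposal would avoid):

```scala
trait R[T]

class A(val i: Int)
class B(i: Int) extends A(i)

implicit val RA: R[A] = new R[A] {}

// U is only bounded from below by T, so implicit search is free to
// instantiate it at the supertype A, for which RA provides evidence.
def foo[T, U >: T](x: T)(implicit ev: R[U]) = 0

foo(new B(1)) // T = B, U instantiated to A, RA found
```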
In my opinion, this should not violate any type-system rules, since all typing rules must still be satisfied; I could always add type annotations by hand to make these examples compile, so there is no correctness problem here.
I understand that this is not a formal enough definition, but my goal here is to start a discussion of this issue.
WDYT?