Existential Types Going Away?

Daniel Spiewak

Oct 18, 2012, 11:58:02 AM
to Scala Internals
Rumor started here: https://www.precog.com/blog-precog-2/entry/existential-types-ftw#comment-11  Mike Slinn is under the impression (apparently from Martin and Josh) that existential types are incompatible with macros (??) and are going to be removed post 2.10 (!!).  This doesn't really mesh with what I know of macros, and it doesn't make any sense to me in the context of the planned move toward a more DOT-based meta-calculus.

Can anyone shed some light on this?

Daniel

Eugene Burmako

Oct 18, 2012, 12:07:36 PM
to <scala-internals@googlegroups.com>
I would guess that the mentioned deprecation stuff is the SIP-18 thing, according to which the compiler warns about types expressed with forSome (represented as ExistentialTypeTrees in the parser, dubbed "existential types" in SLS).
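
To make the distinction concrete, a minimal sketch (my example, not taken from the SIP itself):

  import scala.language.existentials

  object Sip18Sketch {
    val ok: List[_] = List(1, 2, 3)   // wildcard syntax: not affected by the warning

    // An existential with two occurrences of T has no wildcard equivalent; without
    // the import above, 2.10 emits the feature warning for writing it.
    type Pair = (List[T], Option[T]) forSome { type T }
  }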

As to macros, there was (and is) a bug in the tracker that is caused by an internal implementation detail of existentials. Inside the compiler we have a wider array of tools to deal with this kind of situation, but the reflection API isn't rich enough, so the macro in question didn't work. Other than that, I don't know of any potential glitches caused by existentials w.r.t. macros.

martin odersky

Oct 18, 2012, 12:16:59 PM
to scala-i...@googlegroups.com
On Thu, Oct 18, 2012 at 6:07 PM, Eugene Burmako <eugene....@epfl.ch> wrote:
I would guess that the mentioned deprecation stuff is the SIP-18 thing, according to which the compiler warns about types expressed with forSome (represented as ExistentialTypeTrees in the parser, dubbed "existential types" in SLS).

As to macros, there was (and is) a bug in the tracker that is caused by an internal implementation detail of existentials. Inside the compiler we have a wider array of tools to deal with this kind of situation, but the reflection API isn't rich enough, so the macro in question didn't work. Other than that, I don't know of any potential glitches caused by existentials w.r.t. macros.

Daniel,

Existential types do not behave nicely under tree rewritings, which is a potential trouble spot for macros that rewrite trees carrying existential types. The problem has no easy fix. That was the "incompatibility with macros" that Josh and I were referring to. (In detail, the problem is that "opened existentials" lead to type skolems, and some common tree transformations such as substitutions do not extend cleanly to type skolems.)
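
A rough source-level analogue of those opened existentials (a user-level sketch, not the compiler internals):

  import scala.language.existentials

  object SkolemSketch {
    // Matching on a value of existential type "opens" it: inside the case, x and y
    // share one fixed-but-unknown type (a skolem), which cannot simply be substituted
    // away like an ordinary type parameter; all we can do here is widen it to Any.
    def swap(p: (A, A) forSome { type A }): (Any, Any) = p match {
      case (x, y) => (y, x)
    }
  }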

Related to DOT: What we would like to achieve is that existential types are modelled as dependent types. This should be possible with all existential types that can today be expressed as wildcard types. If we do this (none of this is decided yet), we will probably need to disallow existential types that cannot be expressed as wildcards. Incidentally, these are the ones SIP 18 warns you about.
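
For concreteness, here is a sketch of how the wildcard form can already be mimicked with an abstract type member today (the general idea only; this is not the decided encoding):

  object DependentSketch {
    // The wildcard type List[_] says "a list of some unknown element type"; a value
    // with an abstract type member conveys the same information via a path-dependent type.
    trait SomeList {
      type Elem
      def value: List[Elem]
    }

    def length1(xs: List[_]): Int  = xs.length
    def length2(sl: SomeList): Int = sl.value.length   // sl.value: List[sl.Elem]
  }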

Here's what the scaladoc says for language.existentials:


Only where enabled, existential types that cannot be expressed as wildcard types can be written and are allowed in inferred types of values or return types of methods. Existential types with wildcard type syntax such as List[_], or Map[String, _] are not affected.

Why keep the feature? Existential types are needed to make sense of Java’s wildcard types and raw types and the erased types of run-time values.

Why control it? Having complex existential types in a code base usually makes application code very brittle, with a tendency to produce type errors with obscure error messages. Therefore, going overboard with existential types is generally perceived not to be a good idea. Also, complicated existential types might be no longer supported in a future simplification of the language.


Cheers

 - Martin


Daniel Spiewak

Oct 18, 2012, 12:30:47 PM
to scala-i...@googlegroups.com
I'll buy this.  I can see why tree-rewriting would be annoying in this context.  It doesn't strike me as something that is undecidable, but it would definitely require some machinery that is specific to existential types of the "forSome" form.

I've actually gone off on forSome in recent days, so I'm less opposed to giving a warning on it (so long as the same warning doesn't affect existentials expressed as dependent types, which is the more useful form).  Obviously, forSome is still needed for Java compatibility, but use in any other context is extremely suspect because of subtyping.  For example, something like this:

def foo(xs: (Option[A], A) forSome { type A })

Doesn't actually mean anything, since Tuple2 and Option are covariant type constructors.
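
To spell that out with a concrete call (my elaboration; nothing beyond covariance is assumed):

  import scala.language.existentials

  object CovarianceSketch {
    def foo(xs: (Option[A], A) forSome { type A }): Unit = ()

    // Because Option and Tuple2 are covariant, any (Option[X], Y) widens to
    // (Option[Any], Any), which instantiates the existential with A = Any, so the
    // apparent "same A in both positions" constraint never actually bites:
    foo((Some("hello"), 42))   // compiles, despite the mismatched element types
  }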

We do actually use forSome types in a couple of places inside the Precog code base, but they are always accompanied by copious comments explaining why the restriction actually does what it appears to do.

Daniel

Miles Sabin

Oct 18, 2012, 12:31:07 PM
to scala-i...@googlegroups.com
On Thu, Oct 18, 2012 at 5:16 PM, martin odersky <martin....@epfl.ch> wrote:
> Only where enabled, existential types that cannot be expressed as wildcard
> types can be written and are allowed in inferred types of values or return
> types of methods. Existential types with wildcard type syntax such as
> List[_], or Map[String, _] are not affected.

Are you saying that this is a syntactic level thing only?

Suppose we have an existential which isn't immediately in that form, e.g.,

type Problematic = (List[T], Option[T]) forSome { type T }

we can rewrite this to an equivalent form which can be expressed using
a syntactic wildcard with the help of an alias,

type F[T] = (List[T], Option[T])
type Unproblematic = F[_]

Is that really all that's needed to circumvent the warning?

Cheers,


Miles

--
Miles Sabin
tel: +44 7813 944 528
skype: milessabin
gtalk: mi...@milessabin.com
g+: http://www.milessabin.com
http://twitter.com/milessabin

Daniel Spiewak

Oct 18, 2012, 12:36:58 PM
to scala-i...@googlegroups.com
I don't think it's possible to do that sort of rewriting in every case of forSome.  I'll admit that I don't know the bounds of that language feature as well as I should, so I could be mistaken.

Daniel

martin odersky

Oct 18, 2012, 12:38:55 PM
to scala-i...@googlegroups.com
On Thu, Oct 18, 2012 at 6:31 PM, Miles Sabin <mi...@milessabin.com> wrote:
On Thu, Oct 18, 2012 at 5:16 PM, martin odersky <martin....@epfl.ch> wrote:
> Only where enabled, existential types that cannot be expressed as wildcard
> types can be written and are allowed in inferred types of values or return
> types of methods. Existential types with wildcard type syntax such as
> List[_], or Map[String, _] are not affected.

Are you saying that this is a syntactic level thing only?

Suppose we have an existential which isn't immediately in that form, e.g.,

  type Problematic = (List[T], Option[T]) forSome { type T }

we can rewrite this to an equivalent form which can be expressed using
a syntactic wildcard with the help of an alias,

  type F[T] = (List[T], Option[T])
  type Unproblematic = F[_]

Is that really all that's needed to circumvent the warning?

If it is, we have to strengthen the warning. The intention is that Unproblematic would be flagged, because existential types over type aliases would also be disallowed. A Java-style wildcard type must be a class or a trait applied to the existentially bound type parameters. Everything else is problematic.
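
Concretely, the intended line would fall roughly here (an illustrative sketch of the rule above, not decided policy):

  object WildcardSketch {
    class Box[T]
    type Alias[T] = (List[T], Option[T])

    val fine: Box[_]      = new Box[Int]          // wildcard over a class: stays legal
    val flagged: Alias[_] = (List(1), Option(1))  // wildcard over a type alias: would be
                                                  // flagged under the strengthened rule
  }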

Cheers

 - Martin

 

martin odersky

Oct 18, 2012, 12:40:59 PM
to scala-i...@googlegroups.com
On Thu, Oct 18, 2012 at 6:30 PM, Daniel Spiewak <djsp...@gmail.com> wrote:
I'll buy this.  I can see why tree-rewriting would be annoying in this context.  It doesn't strike me as something that is undecidable, but it would definitely require some machinery that is specific to existential types of the "forSome" form.

It's not undecidable. It just requires a lot of complicated machinery for something that is inherently fragile and hard to use. Given the choice between working on that machinery and working on a simplification and unification of features, it's clear that the second alternative is more attractive.

Cheers

 - Martin

Miles Sabin

Oct 18, 2012, 12:49:07 PM
to scala-i...@googlegroups.com
On Thu, Oct 18, 2012 at 5:38 PM, martin odersky <martin....@epfl.ch> wrote:
> If it is we have to strengthen the warning. The intention is
> that Unproblematic would be flagged, because existential types over type
> aliases would also be disallowed. A Java wildcard type must be a class or a
> trait that contains
> existentially bound type parameters. Everything else is problematic.

Is the intention that the following be allowed by DOT?

trait Foo {
  type T
  type F[T] = (List[T], Option[T])
  type Unproblematic = F[T]
}

Adriaan Moors

Oct 18, 2012, 12:59:50 PM
to scala-i...@googlegroups.com
Yes, as long as you can express it only with type members, which is true in this case.

Stefan Zeiger

Oct 19, 2012, 6:10:38 AM
to scala-i...@googlegroups.com
On 2012-10-18 18:16, martin odersky wrote:
>
> Related to DOT: What we would like to achieve is that existential
> types are modelled as dependent types. This should be possible with
> all existential types that can today be expressed as wildcard types.
> If we do this (none of this is decided yet), we will probably need to
> disallow existential types that cannot be expressed as wildcards.
> Incidentally, these are the ones SIP 18 warns you about.

So far I had simply imported scala.language.existentials when I got a warning, without looking at the details. Since I was pretty sure that I was only using simple wildcard existentials in the Slick codebase, I took a closer look and discovered that the warnings are of the following kind:

trait A

trait B[T]

case class C(a: A, b: B[_])


[warn] C:\Users\szeiger\code\slick\src\main\scala\scala\slick\driver\JdbcStatementBuilderComponent.scala:637: inferred existential type Option[(scala.slick.driver.A, scala.slick.driver.B[_$20])] forSome { type _$20 }, which cannot be expressed by wildcards, should be enabled
[warn] by making the implicit value language.existentials visible.
[warn] case class C(a: A, b: B[_])
[warn] ^

Why does the compiler infer that strange existential type?

-sz

Lukas Rytz

Oct 19, 2012, 6:30:25 AM
to scala-i...@googlegroups.com
Unapply is the cause:

          case <synthetic> def unapply(x$0: C): Option[(A, B[_$1])] forSome { type _$1 } = if (x$0.==(null))
            scala.this.None
          else
            Some.apply[(A, B[_$1])](Tuple2.apply[A, B[_$1]](x$0.a, x$0.b));


The problematic return type "Option[(A, B[_$1])] forSome { type _$1 }" is not generated together with the unapply method, but inferred by the compiler (when the generated method is type-checked).

In the body of the unapply, I don't know what the type name _$1 is bound to.
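
For comparison, a hand-written unapply with an explicit wildcard return type typechecks without producing the non-wildcard existential (a sketch reusing Stefan's definitions, not the actual synthesizer change):

  trait A
  trait B[T]
  class C(val a: A, val b: B[_])

  object C {
    // With the return type written out, nothing is left for the typer to infer,
    // so no non-wildcard existential (and no feature warning) shows up.
    def unapply(c: C): Option[(A, B[_])] =
      if (c == null) None else Some((c.a, c.b))
  }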

martin odersky

Oct 19, 2012, 6:51:28 AM
to scala-i...@googlegroups.com
Good catch. We should provide the return type explicitly. Can you file a ticket? 

Thanks

 - Martin

Lukas Rytz

Oct 19, 2012, 7:19:30 AM
to scala-i...@googlegroups.com
Good catch. We should provide the return type explicitly. Can you file a ticket? 

Paul Phillips

Oct 19, 2012, 10:47:11 AM
to scala-i...@googlegroups.com


On Fri, Oct 19, 2012 at 3:30 AM, Lukas Rytz <lukas...@epfl.ch> wrote:
The problematic return type "Option[(A, B[_$1])] forSome { type _$1 }" is not generated together with
the unapply method, but inferred by the compiler (when the generated method is type-checked).

That reminds me of this recent ticket, for which I assembled the following illustrative source.  The map and pattern match are mimicking the code generated in a for comprehension.  The general situation is that this compiles:

  for (x <- foo) { val p = x.p ; p(x) }

where this does not:

  for (x <- foo ; p = x.p) { p(x) }

with the difference being again an inferred existential type in synthetic code.
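
Roughly how the two loops desugar (a paraphrase of the for-comprehension rewriting, not the exact trees scalac produces):

  // for (x <- foo) { val p = x.p ; p(x) }
  //   ~>  foo foreach { x => val p = x.p ; p(x) }       // x keeps its precise type
  //
  // for (x <- foo ; p = x.p) { p(x) }
  //   ~>  foo map { x => val p = x.p ; (x, p) } foreach {
  //         case (x, p) => p(x)                         // the tuple's inferred type is an
  //       }                                             // existential, so the tie between
  //                                                     // the types of x and p is lost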


trait Z {
  trait Lower
  trait Upper {
    val result: Z
    def inverse: result.Lower
  }

  val upper: Upper
  val pred: Lower => Unit

  def ok1 = Seq(upper) map { u => u.result.pred(u.inverse) }
  def ok2 = Seq(upper) map { u =>              u.result.pred match { case p               => p(u.inverse) } }
  def ok3 = Seq(upper) map { u =>                  u.inverse match { case x               => u.result.pred(x) } }
  def ok4 = Seq(upper) map { u => (u.result.pred, u.inverse) match { case (p, x)          => p(x) } }
  def ok5 = Seq(upper) map { u => (u.result.pred, u        ) match { case (p, u1: u.type) => p(u1.inverse) } }
  def nok = Seq(upper) map { u => (u.result.pred, u        ) match { case (p, u1)         => p(u1.inverse) } }

  /**

  a.scala:15: error: type mismatch;
   found   : u1.result.Lower
   required: u.result.Lower
    def nok = Seq(upper) map { u => (u.result.pred, u) match { case (p, u1) => p(u1.inverse) } }
                                                                                    ^
  **/
}

Paul Phillips

Oct 19, 2012, 11:18:21 AM
to scala-i...@googlegroups.com


On Fri, Oct 19, 2012 at 7:47 AM, Paul Phillips <pa...@improving.org> wrote:
def ok5 = Seq(upper) map { u => (u.result.pred, u ) match { case (p, u1: u.type) => p(u1.inverse) } } 
def nok = Seq(upper) map { u => (u.result.pred, u ) match { case (p, u1) => p(u1.inverse) } }

But in this case, as demonstrated above, it seems like it would work if we hung onto the singleton type of u, which we do in the untupled case, but not once it finds a friend in p.

Suddenly the relevance of this very old ticket moved up a bit in my mind:

  "Sugar for tuples does not preserve singleton types"

It seems I have even commented to this effect before, but with less awareness:

  "Slightly surprisingly, this looks unrelated to tuple sugar per se, but a consequence of type inference in a pattern context."

iulian dragos

Oct 19, 2012, 12:12:30 PM
to scala-i...@googlegroups.com
On Thu, Oct 18, 2012 at 5:58 PM, Daniel Spiewak <djsp...@gmail.com> wrote:
Rumor started here: https://www.precog.com/blog-precog-2/entry/existential-types-ftw#comment-11  Mike Slinn is under the impression (apparently from Martin and Josh) that existential types are incompatible with macros (??) and are going to be removed post 2.10 (!!).  This doesn't really mesh with what I know of macros, and it doesn't make any sense to me in the context of the planned move toward a more DOT-based meta-calculus.

Just a quick note related to your blog post (which I thoroughly enjoyed; we need more of this!): you are using abstract types, not existentials. So unless I missed something, Mike's comment, while factually true, does not invalidate your approach.

iulian
 

Can anyone shed some light on this?

Daniel



--
« Je déteste la montagne, ça cache le paysage » ("I hate mountains, they hide the view")
Alphonse Allais

Daniel Spiewak

Oct 19, 2012, 12:31:03 PM
to scala-i...@googlegroups.com
Abstract types in the way that I used them are existential types, even though SLS and scalac may not refer to them as such.

Daniel

Paolo G. Giarrusso

Oct 19, 2012, 1:00:27 PM
to scala-i...@googlegroups.com


On Friday, October 19, 2012, 1:20:05 PM UTC+2, Lukas Rytz wrote:

Good catch. We should provide the return type explicitly. Can you file a ticket? 


I'm a bit confused: if annotating a different return type still makes the code typecheck, it's confusing that the two types are treated as unrelated; in fact, in this case I think they should be equivalent in the first place.

The two possible types are:
Option[(A, B[_])], that is Option[(A, B[_$1]  forSome { type _$1 })] (note that _$1 is bound right outside of B, though this might look confusing).
and
Option[(A, B[_$1])] forSome { type _$1 }
Now, I would say that since both Option and tuples are covariant, these two types should in fact be equivalent. If B is not invariant, _$1 can be replaced in both cases by either Any (if B is covariant) or Nothing (if B is contravariant). In both cases, the type used for replacement is the same; so we see that the two types are equivalent. Scalac apparently can't figure that out in the contravariant case:

> trait B[-T] {def f: T => String}
> case class C(a: A, b: B[_])
warning: inferred existential type Option[(A, B[_$1])] forSome { type _$1 }, which cannot be expressed by wildcards,  should be enabled by [...]

possibly because it is not too eager to collapse skolems to Any or Nothing. (Note: I didn't check the output of typer).

I believe that in fact the two types should just be declared equivalent. Extrapolating from what I have studied, I think that existential quantification should be able to float freely out of (and into) covariant type constructors, as it does in logic, since existential quantification over types and over propositions should be related by the Curry-Howard correspondence (right?).
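
As a small sanity check of the covariant half of that argument (my names; the contravariant case is the one scalac balks at, as shown above):

  import scala.language.existentials

  object CovariantCheck {
    trait A
    trait B[+T]

    implicitly[B[_] <:< B[Any]]                                  // the unknown element widens to Any
    implicitly[(A, B[Any]) <:< ((A, B[X]) forSome { type X })]   // and Any instantiates X
  }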

If I didn't get this wrong I could report a bug, but I'm guessing it'd be a WONTFIX (unless something goes wrong with the DOT plan).

Cheers,
Paolo

Paul Phillips

Oct 19, 2012, 1:09:14 PM
to scala-i...@googlegroups.com


On Fri, Oct 19, 2012 at 10:00 AM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
Option[(A, B[_])], that is Option[(A, B[_$1]  forSome { type _$1 })] (note that _$1 is bound right outside of B, though this might look confusing).
and
Option[(A, B[_$1])] forSome { type _$1 }

For some context, see


How to write the type returned by f?

  class Map[T1, T2] { class Inner }
  def f[T1, T2] = { val m = new Map[T1, T2] ; new m.Inner }

Don't do it!

  var x3: Map[t1, t2]#Inner forSome { type t1 ; type t2; } = f[String, Int]

Daniel Spiewak

Oct 19, 2012, 1:19:47 PM
to scala-i...@googlegroups.com
You don't even have to go that far.  What you just illustrated is a general issue with the let-binding of type parameters on functions.  Your solution is to use existential types (which sort of does the right thing), but the more correct solution would be to apply a higher-rank type and release the binding of T1 and T2 from the function def.  In other words, what you really want here is an unbound universal, rather than an unbound existential.

Daniel

Paul Phillips

Oct 19, 2012, 1:25:34 PM
to scala-i...@googlegroups.com


On Fri, Oct 19, 2012 at 10:19 AM, Daniel Spiewak <djsp...@gmail.com> wrote:
You don't even have to go that far.  What you just illustrated is a general issue with the let-binding of type parameters on functions.  Your solution is to use existential types (which sort of does the right thing), but the more correct solution would be to apply a higher-rank type and release the binding of T1 and T2 from the function def.  In other words, what you really want here is an unbound universal, rather than an unbound existential.

Right, so who wants language support for...

scala> trait Poly { def apply[T1, T2] : Any }
defined trait Poly

scala> def f = new Poly { def apply[T1, T2] = { val m = new Map[T1, T2] ; new m.Inner } }
f: Poly{def apply[T1, T2]: Map[T1,T2]#Inner}

scala> f[String,Int]
res0: Map[String,Int]#Inner = Map$Inner@770440dd

Paolo G. Giarrusso

Oct 19, 2012, 4:23:12 PM
to scala-i...@googlegroups.com
That's a structural type, right? Is apply invoked through reflection? 
scala> f[String,Int]
res0: Map[String,Int]#Inner = Map$Inner@770440dd

If you're talking about proper support for anonymous polymorphic functions, I'm pretty sure many people would like that.

I'd prefer the following:

trait PolyFunction[R[_]] { def apply[T1]: R[T1] }
def f[T]: (T, Int) = ???
> f _
res1: PolyFunction[[T](T, Int)] = <polymorphic function>

And yes, I know this requires higher-order inference to deduce R.

It'd be nice to also abstract over _dependent_ functions, but unless you rely on currying you need a superlinear number of traits DependentFunction<length of first parameter list><length of second parameter list>...<length of Nth parameter list>.

In most cases, I think, it'd even make sense to be able to write polymorphic values:
val f[T]: (T, Int) = (???, 0)
Since T is erased and the method is nullary (so it takes no argument whose type might depend on T), it cannot play a role in the computation, so I could choose to precompute the value. This pattern is encoded manually through casts in the library in various places (src/library/scala/PartialFunction.scala, PartialFunction.partial_pf). I admit that maybe this pattern does not deserve language support, but conceptually allowing this would in fact simplify the language IMHO. But that's an orthogonal point.
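
The manual encoding referred to looks roughly like this (a reconstruction of the general pattern, not the exact library source):

  object PolyValueSketch {
    // A single value is computed once, at the most general type...
    private val fallback: PartialFunction[Any, Any] = { case _ => () }

    // ...and re-exposed at every type instantiation with a cast. Erasure makes the
    // cast free at run time; it is the caller's obligation to use the value in a way
    // that never depends on the pretended B (the library uses it only as a sentinel).
    def fallbackPF[A, B]: PartialFunction[A, B] = fallback.asInstanceOf[PartialFunction[A, B]]
  }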

iulian dragos

Oct 20, 2012, 1:23:23 PM
to scala-i...@googlegroups.com
On Fri, Oct 19, 2012 at 6:31 PM, Daniel Spiewak <djsp...@gmail.com> wrote:
Abstract types in the way that I used them are existential types, even though SLS and scalac may not refer to them as such.

You're right, abstract types have existential type. I got confused by the deprecation discussion, which does not concern them.

iulian
 


