Proposed architecture of Scala's reflection library.


martin odersky

Jul 8, 2011, 7:25:54 AM
to scala-internals
I was thinking a lot recently about how to get a good design for Scala reflection. Here are my thoughts so far:

Scala's reflection should achieve the analogue of Java's reflection, but with Scala's full types.
I.e., there should be a way to

 - get the Scala runtime class from an instance,
 - obtain Scala members from Scala classes, which carry their original Scala types,
 - dereference Scala fields, and invoke Scala methods,
 - query the Scala types of these members, for subtype relationships and other properties.

In addition we want integration into manifests and lifted code:

 - the type descriptor in a manifest should be a reflect.Type.
 - the tree lifted in a Code[T] expression should be a reflect.Tree.

From all this it is clear that the reflection library needs to use a lot of logic from the compiler. It uses isomorphic trees, types, and symbols, needs the same mechanism to deserialize pickle information, needs the same logic for subtype tests, as-seen-from, find-members, etc. It would be a shame to have to duplicate that logic. At the same time, the compiler also has many operations that do not make sense in a reflective setting. For instance, there are operations dealing with compiler phases, or source trees. Also, the number and complexity of compiler operations makes them unsuitable for a general library interface.

So, we need to re-use a lot of logic from the compiler, but need to hide it behind sparser and cleaner interfaces.  I now present a design that achieves this.

The idea is to pack most logic into two packages:

reflect.internal  contains the implementations
reflect.api  contains the public interfaces.

(Question: Better name for reflect.api? reflect.public would be nice but that makes it inaccessible from Java).

Classes in both packages are arranged with the cake pattern, with a "Universe" class mixing in container traits such as Types, Symbols, Names, Trees, Positions, which contain the actual types and associated operations. reflect.internal classes inherit from their corresponding reflect.api classes. The reflect.api classes are mostly pure interfaces which contain abstract types and extractors. Here's an example: In reflect.internal there is a Symbols trait that
contains all symbol types. Its outline is as follows:

  package scala.reflect.internal
  trait Symbols extends api.Symbols {
    class Symbol extends AbsSymbol {
      // lots of definitions
    }
  }

The API class abstracts Symbol into an abstract type:

  package scala.reflect.api
  trait Symbols {
    type Symbol <: AbsSymbol
    abstract class AbsSymbol {
      // declarations of all members of a symbol
      // that should be accessible from reflection;
      // typically these are all abstract
    }
  }

If the concrete class is a case class we also add an extractor to the API trait.
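For illustration, here is a sketch of that pattern, modeled on TypeRef (the member lists are made up): the API trait exposes an abstract type together with an extractor value, and reflect.internal implements TypeRef as a case class and the extractor as a delegating object.

  trait Types {
    type Type
    type Symbol

    type TypeRef <: Type
    val TypeRef: TypeRefExtractor

    abstract class TypeRefExtractor {
      def apply(pre: Type, sym: Symbol, args: List[Type]): Type
      def unapply(tpe: TypeRef): Option[(Type, Symbol, List[Type])]
    }
  }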

We still need a concrete instantiation of a universe. This is done as follows. First, define
a class RuntimeUniverse that extends the internal universe (which is called SymbolTable).

  package scala.reflect

  class RuntimeUniverse extends internal.SymbolTable {
     // definitions that fill in all abstract members of SymbolTable
     // there are about a dozen of these
  }

Finally, define a concrete universe in reflect as follows:

  package object reflect {
     val universe: api.Universe = new RuntimeUniverse
     type Type = universe.Type
     type Symbol = universe.Symbol
     type Tree = universe.Tree
     type Name = universe.Name
     type Position = universe.Position
  }

reflect.universe inherits all mechanisms from reflect.internal.SymbolTable, but its exposed interface is just
reflect.api.Universe. The reflect package object also defines convenient type aliases for Types, Symbols, etc. that all refer to the universe's member types.

Some questions:

The main questions in this architecture concern the role of trees. We could treat them like the other types, i.e. abstract out all tree types with abstract types and all tree pattern matching with extractors. But that would give us >40 abstract types and extractors, and pattern matching on abstract trees would become slow. At the same time, unlike Symbols and Types, there's hardly anything to hide in trees. They are all pretty canonical, with very few extraneous methods and no extraneous fields.

So I think it would be easiest if the case class definitions of all trees were made concrete in reflect.api (they would still refer to abstract Symbols, Types, Names, and Positions). Compiler-specific methods could be taken out of Trees and put in an implicit wrapper class.
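For illustration, a sketch of the wrapper idea (all names here are made up; isErroneous stands in for whatever compiler-only methods get moved out):

  trait CompilerTreeSupport {
    type Tree  // the concrete Tree from reflect.api

    class CompilerTreeOps(tree: Tree) {
      def isErroneous: Boolean = false  // stand-in for a compiler-only operation
    }
    implicit def compilerTreeOps(tree: Tree): CompilerTreeOps =
      new CompilerTreeOps(tree)
  }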

There's only one thing that needs cleaning up: Currently trees display themselves via toString as source programs. This is convenient for composing error messages and compiler diagnostics. But it would be more canonical to have them represented as plain case classes in reflect.api. This means there should be some other method in the compiler that applies a treePrinter to a tree, and toString on a tree should expose the actual case classes. I think that is altogether a better design.
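A self-contained sketch of that separation, with toy tree classes:

  object TreeShow {
    sealed trait Tree
    case class Ident(name: String) extends Tree
    case class Select(qual: Tree, name: String) extends Tree

    // source rendering becomes an explicit printer method...
    def show(t: Tree): String = t match {
      case Ident(n)     => n
      case Select(q, n) => show(q) + "." + n
    }

    // ...while toString keeps the canonical case-class form:
    //   Select(Ident("x"), "length").toString == "Select(Ident(x),length)"
    //   show(Select(Ident("x"), "length"))    == "x.length"
  }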

How to get there? reflect.internal is already done. reflect.api would largely resemble the current reflect.generic, but would drop the concrete implementations that were there so that we could do unpickling. Unpickling will be done like all other operations, in reflect.internal. reflect.internal traits would now inherit from reflect.api (they did not use to inherit from reflect.generic).

Thoughts?

 -- Martin

Kevin Wright

Jul 8, 2011, 8:04:16 AM
to scala-i...@googlegroups.com
I've also looked at reflection, basing it on scalap (https://github.com/scalaj/scalaj-reflect), though I held off on further work once I saw the new structures appear in 2.9.

A few of the more interesting things I found I had to consider:

- How best to deal with boxed/unboxed primitives; the ability to be explicit with types like java.lang.Integer in Scala code makes this a bit challenging

- Should type aliases be preserved in reflections?  Consider something like Predef.String, for example, which will map to a java.lang.String for a JVM target, but not for MSIL.  This is especially noteworthy when such an alias appears in a path-dependent type.  My current feeling is that aliases should be preserved as-is, with a separate method to canonicalise types; this isn't so different from working with symlinks in file systems.

- Abstract type members vs type parameters. An interesting challenge, especially when dealing with aliases that translate between the two schemes

- Higher-Kinded types and type lambdas - enough said!

- Existential types and bounds - Upper and lower bounds in a wildcardType are currently only available via the toString method

- Reification - When converting from Java's reflection types, it would be nice if such conversion could take a type param, allowing any erasure to be reversed.  This offers the potential for a createInstance method with a statically safe return value


--
Kevin Wright

gtalk / msn : kev.lee...@gmail.com
google+: http://gplus.to/thecoda
mail: kevin....@scalatechnology.com
vibe / skype: kev.lee.wright
quora: http://www.quora.com/Kevin-Wright
twitter: @thecoda

"My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger" ~ Dijkstra

Adriaan Moors

Jul 8, 2011, 8:30:57 AM
to scala-i...@googlegroups.com
- Should type aliases be preserved in reflections?  Consider something like Predef.String, for example, which will map to a java.lang.String for a JVM target, but not for MSIL.  This is especially noteworthy when such an alias appears in a path-dependent type.  My current feeling is that aliases should be preserved as-is, with a separate method to canonicalise types; this isn't so different from working with symlinks in file systems.
this is one of the differences from the full compiler, where you can look at the type of a symbol at various points in time (i.e., at a specific phase), whereas reflection will have the type information after the typer & superaccessors phases (since pickler runs right after, and I assume reflection will get its information from the pickled information)

luckily, I think the only other phase after which we might like to see type information is erasure, which we can get using Java reflection -- no need to duplicate the transformations in the compiler

in any case, references to types are called TypeRefs, irrespective of the kind of type they're referring to (class, abstract type, existentially bound type, type alias) -- you can dereference a type alias using `dealias` or `normalize` (the latter does more than simple type alias expansion)
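(For illustration, this is roughly how the distinction later surfaced in the runtime reflection API; `dealias` is the 2.11+ spelling, and the printed forms are approximate:)

  import scala.reflect.runtime.universe._

  object Aliases { type StringToLong = Map[String, Long] }

  val alias = typeOf[Aliases.StringToLong]
  println(alias)          // Aliases.StringToLong -- the alias is preserved
  println(alias.dealias)  // Map[String,Long]     -- one step of expansion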

 
- Abstract type members vs type parameters. An interesting challenge, especially when dealing with aliases that translate between the two schemes 

- Higher-Kinded types and type lambdas - enough said!

- Existential types and bounds - Upper and lower bounds in a wildcardType are currently only available via the toString method
I don't see the challenge with providing any of these -- we're simply dumping what the compiler already knows anyway.

On a more conceptual level, I guess I see where you're going with this: if you want to reason about these types it could get more complicated, but that would depend on the concrete application (of reflection)

- Reification - When converting from Java's reflection types, it would be nice if such conversion could take a type param, allowing any erasure to be reversed.  This offers the potential for a createInstance method with a statically safe return value
as far as I understand, we'd typically skip Java's type info and go straight to the source: the type information that the compiler has pickled (serialized) for us in classfile annotations (although, as I mentioned before, java.reflect info could come in handy to provide a minimal atPhase(erasure.next) mechanism)

martin odersky

Jul 8, 2011, 8:45:37 AM
to scala-i...@googlegroups.com
As Adriaan alluded to, we want to follow in most (all?) respects the same scheme as the compiler for reflection. More precisely, the same way the compiler sees the world at the phase where it pickles its internal information. No need to reinvent the wheel. The compiler has had 8 years to come up with a mature design, so by default we should just adopt the same for reflection.

Cheers

 -- Martin

Simon Ochsenreither

Jul 8, 2011, 1:49:43 PM
to scala-i...@googlegroups.com
Hi,

> - get the Scala runtime class from an instance,
> - obtain Scala members from Scala classes, which carry their original Scala
> types,
> - dereference Scala fields, and invoke Scala methods,
> - query Scala types of these members, for subtype relationships and others.

> - the type descriptor in a manifest should be a reflect.Type.
> - the tree lifted in a Code[T] expression should be a reflect.Tree.

Imho there is some "enclosing" relationship missing.
For instance it is currently not possible to discover B from A
reflectively here:
object A { object B }
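(For the record, a sketch of how this kind of discovery eventually became possible with the runtime reflection API that shipped later; TermName is the 2.11+ spelling:)

  import scala.reflect.runtime.universe._

  object A { object B }

  println(typeOf[A.type].member(TermName("B")))  // object B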

> The idea is to pack most logic into two packages:
>
> reflect.internal contains the implementations
> reflect.api contains the public interfaces.
>
> (Question: Better name for reflect.api? reflect.public would be nice but
> that makes it inaccessible from Java).

I would prefer just "scala.reflect" for the interface. Additionally I
wonder if it would make sense to have the implementation in a
completely different package, and exclude it from any "binary
compatibility" guarantees.

This would make it much easier to keep the code in sync with the one
in the compiler, while still providing access to those who really need
"more" features.

I think it could be a bit like the com.sun namespace in Java, for
instance, which is used internally but no guarantees are made.

Independently of that it would be nice to have some sort of @beta
annotation ("This class/method/... might change in the future" like
Google has) so it would be possible to ship the reflection code as a
separate jar file with the next release to get early feedback.

(@deprecated and @migration cover the end of an API's lifecycle; it
would be nice to have something for the start of an API, too.)

Reflection is used for very different things and some people do weird
things with it, so it would be nice to get as much feedback as
possible. Reflection is hard to get right (look at Java, things are
broken and never got fixed there), so before setting things in stone
it would be nice to get it in the hands of as many people as possible,
not just those running trunk.

Thanks and bye!

Simon


Alex Cruise

Jul 8, 2011, 9:24:23 PM
to scala-i...@googlegroups.com
On Fri, Jul 8, 2011 at 10:49 AM, Simon Ochsenreither <si...@ochsenreither.de> wrote:
Imho there is some "enclosing" relationship missing.
For instance it is currently not possible to discover B from A reflectively here:
 object A { object B }

Yeah, what I'd really like to do is handle this scenario, which AFAICT everyone who's playing with ScalaSigParser has punted on so far:
trait OuterTrait {
  abstract class InnerAbstract(i: Int)
}

object OuterObject extends OuterTrait {
  case class InnerConcrete(i: Int) extends InnerAbstract(i)
}

val elephant = OuterObject.InnerConcrete(123)

// This isn't too hard, except I can't think of any good general way
// to serialize the $outer
val burrow: String = serialize(elephant)

val elephant2 = deserialize[OuterObject.InnerConcrete](burrow)
// And that is where my claim falls down.

-0xe1a

Rafael de F. Ferreira

Jul 9, 2011, 9:56:42 PM
to scala-i...@googlegroups.com
Perhaps checking whether the library allows for the scenarios
described in Bracha's Mirrors paper would be an interesting benchmark
for the overall design.

Another relevant use case would be to see if the library enables (and
ideally makes easy) writing mapping libraries akin to Rogue or
Lift's records without having to resort to Field types and query DSLs
(versus ordinary vals/vars and for comprehensions, similar to LINQ).

--
Rafael de F. Ferreira.
http://www.rafaelferreira.net/

martin odersky

Jul 11, 2011, 4:58:08 AM
to scala-i...@googlegroups.com
On Fri, Jul 8, 2011 at 7:49 PM, Simon Ochsenreither <si...@ochsenreither.de> wrote:
Hi,

 - get the Scala runtime class from an instance,
 - obtain Scala members from Scala classes, which carry their original Scala
types,
 - dereference Scala fields, and invoke Scala methods,
 - query Scala types of these members, for subtype relationships and others.
 - the type descriptor in a manifest should be a reflect.Type.
 - the tree lifted in a Code[T] expression should be a reflect.Tree.

Imho there is some "enclosing" relationship missing.
For instance it is currently not possible to discover B from A reflectively here:
 object A { object B }

Agreed. The above was not meant to be a complete list, just to give some ideas of what reflection is about.

The idea is to pack most logic into two packages:

reflect.internal  contains the implementations
reflect.api  contains the public interfaces.

(Question: Better name for reflect.api? reflect.public would be nice but
that makes it inaccessible from Java).
I would prefer just "scala.reflect" for the interface.

scala.reflect is the shorthand interface. So there will be a

scala.reflect.Type   which is an alias of    scala.reflect.RuntimeUniverse.Type

RuntimeUniverse     itself has type   scala.reflect.api.Universe

api.Universe has a mixin trait api.Types, which contains the abstract type definition of class Type.

I think it's best to package the whole cake pattern setup in a subpackage.

Additionally I wonder if it would make sense to have the implementation in a completely different package, and exclude it from any "binary compatibility" guarantees.


Yes, good point. We have to figure out how to package these things. package reflect.internal (or whatever it is) should certainly not be subject to binary compatibility constraints.

 
This would make it much easier to keep the code in sync with the one in the compiler, while still providing access to those who really need "more" features.

I think it could be a bit like the com.sun namespace in Java, for instance, which is used internally but no guarantees are made.

Yes. Note that one way to achieve that is to establish the convention that packages named `internal` come with no guarantees. Another way is to put it into a different package root (but under which name?).
 
Independently of that it would be nice to have some sort of @beta annotation ("This class/method/... might change in the future" like Google has) so it would be possible to ship the reflection code as a separate jar file with the next release to get early feedback.

Agreed.

(@deprecated and @migration cover the end of an API's lifecycle, it owuld be nice to have something for the start of an API, too.)

Cheers

 -- Martin

martin odersky

Jul 11, 2011, 5:02:37 AM
to scala-i...@googlegroups.com
On Sun, Jul 10, 2011 at 3:56 AM, Rafael de F. Ferreira <raf...@rafaelferreira.net> wrote:
Perhaps checking whether the library allows for the scenarios
described in Bracha's Mirrors paper would be an interesting benchmark
for the overall design.

Did you have a specific one in mind here?
 
Another relevant use case would be to see if the library enables (and
ideally makes easy) writing mapping libraries akin to Rogue or
Lift's records without having to resort to Field types and query DSLs
(versus ordinary vals/vars and for comprehensions, similar to LINQ).

Yes, agreed. We plan to have the library out in an early milestone, so that people can experiment with it before it gets baked in a release.

Cheers

 -- Martin
 
--

Josh Suereth

Jul 11, 2011, 8:13:07 AM
to scala-i...@googlegroups.com

one of my concerns with the cake pattern is ensuring the layered types are referencable outside the universe they are contained within.

Although x.T forSome { val x : Universe } or val z = ImplObj.T can work, I know that this typing can be confusing for those new to scala.

As long as the API package has stable identifiers to the types, I think the approach is great.
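For illustration, a self-contained toy of the two options (and of the dependent-method-type style Miles brings up just below); all names are made up:

  object UniverseDemo {
    trait Universe { type T; def default: T }

    object U1 extends Universe { type T = Int;    def default = 0  }
    object U2 extends Universe { type T = String; def default = "" }

    // existential: a T from some universe, with its origin forgotten
    val u: Universe = U1
    val anyT: x.T forSome { val x: Universe } = u.default

    // stable identifier: the path-dependent type stays referenceable
    val s: U2.T = U2.default

    // dependent method type: the result type tracks the universe passed in
    def defaultOf(x: Universe): x.T = x.default
  }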

Miles Sabin

Jul 11, 2011, 8:49:16 AM
to scala-i...@googlegroups.com
On Mon, Jul 11, 2011 at 1:13 PM, Josh Suereth <joshua....@gmail.com> wrote:
> one of my concerns with the cake pattern is ensuring the layered types are
> referencable outside the universe they are contained within.
>
> Although x.T forSome { val x : Universe } or val z = ImplObj.T can work, I
> know that this typing can be confusing for those new to scala.

Dependent method types enabled by default would help enormously ...

Cheers,


Miles

--
Miles Sabin
tel: +44 7813 944 528
gtalk: mi...@milessabin.com
skype: milessabin
http://www.chuusai.com/
http://twitter.com/milessabin

Tiark Rompf

Jul 11, 2011, 9:16:19 AM
to scala-i...@googlegroups.com
What are the consequences of having one global RuntimeUniverse instance in terms of memory use?

Some applications use reflection during initialization but not afterwards. For those, it will be important not to leak memory for reflection data (e.g. Symbol or Type instances) that is no longer used.

Are there other pieces of the compiler we might want to expose in the future? The ICode layer could be a nice basis for a bytecode manipulation library for example.

Cheers,
- Tiark

Rafael de F. Ferreira

Jul 11, 2011, 10:44:58 AM
to scala-i...@googlegroups.com
On Mon, Jul 11, 2011 at 6:02 AM, martin odersky <martin....@epfl.ch> wrote:
>
>
> On Sun, Jul 10, 2011 at 3:56 AM, Rafael de F. Ferreira
> <raf...@rafaelferreira.net> wrote:
>>
>> Perhaps checking whether the library allows for the scenarios
>> described in Bracha's Mirrors paper would be an interesting benchmark
>> for the overall design.
>>
> Did you have a specific one in mind here?

No specific example in mind. But it would be nice to be able to write
code such as an object browser that can work with a live process (via
RTTI) or with a memory dump (via specific implementations of the
reflection interfaces), or a class browser that works in-process or
remotely. IMO Mirrors are just an application of traditional software
design principles to the problem of reflection.

>
>>
>> Another relevant use case would be to see if the library enables (and
>> ideally makes easy) writing mapping libraries akin to Rogue or
>> Lift's records without having to resort to Field types and query DSLs
>> (versus ordinary vals/vars and for comprehensions, similar to LINQ).
>>
> Yes, agreed. We plan to have the library out in an early milestone, so that
> people can experiment with it before it gets baked in a release.
>

Another important case, neglected in java.lang.reflect, is the
reflection of parameter names. This is such a sore spot for Java that
a project was created to hack around it:
http://paranamer.codehaus.org/.

Cheers.

Paul Phillips

Jul 11, 2011, 1:00:23 PM
to scala-i...@googlegroups.com, martin odersky
As a code review it's going to get too long, so let's give it a better chance by having me send some shorter pieces to the list where I can maybe draw in some other opinions too, since I doubt many of you surf fisheye for comments. I could really use some kind of futuristic brain dump device right about now.


StandardDefinitions:

def RootPackage: Symbol
def RootClass: Symbol
// Is the root package useful reflectively? The only purpose it
// serves that I'm aware of is at the source level, when you need
// to disambiguate "import foo" from "import _root_.foo".
// Reflectively it just adds a layer of indirection to RootClass.

def EmptyPackage: Symbol
def EmptyPackageClass: Symbol
def ScalaPackage: Symbol
def ScalaPackageClass: Symbol
// Having "packages" and "package classes" exposed adds semantic
// confusion for no real gain, because "package as term" and "package as type"
// are fictions we create to fit with the scala model. It is confusing
// enough inside the compiler.
//
// They have the same members:
// scala> definitions.ScalaPackage.tpe.members.toSet == definitions.ScalaPackageClass.tpe.members.toSet
// res0: Boolean = true
//
// All of them are owned by the package class (or the "package object class"
// for those defined there, for even more confusion.)
//
// I think we should expose one consistent "package" construct at this level,
// and people can dig in with companionSymbol or similar if they really need to
// draw distinctions.

def signature(tp: Type): String
// 1) "signature" is too general a name, it can mean too many things.
// 2) In general the type isn't enough to generate the signature, you
// also need the symbol for which the signature is being constructed.
// (See usages in Erasure's javaSig.)

/** Is symbol one of the value classes? */
def isValueClass(sym: Symbol): Boolean

/** Is symbol one of the numeric value classes? */
def isNumericValueClass(sym: Symbol): Boolean

// These are only here because that's where they evolved, but it's inconsistent.
// They should be members of AbsSymbol, where far more specific tests
// already exist (i.e. isEmptyPackage and isEmptyPackageClass are testing
// for a single symbol.)

Names:

def newTermName(cs: Array[Char], offset: Int, len: Int): TermName
def newTermName(cs: Array[Byte], offset: Int, len: Int): TermName
def newTermName(s: String): TermName

def newTypeName(cs: Array[Char], offset: Int, len: Int): TypeName
def newTypeName(cs: Array[Byte], offset: Int, len: Int): TypeName
def newTypeName(s: String): TypeName

// I don't see the need for 2/3 of these signatures for reflection.
// Just these:
def newTermName(s: String): TermName
def newTypeName(s: String): TypeName

Paul Phillips

Jul 11, 2011, 1:04:49 PM
to Odersky Martin, scala-i...@googlegroups.com
I'm surprised you haven't been bitten by scala's enumeration enough to avoid it here. How about I make these case objects?


object Modifier extends Enumeration {
  val `protected`, `private`, `override`, `abstract`, `final`,
      `sealed`, `implicit`, `lazy`, `case`, `trait`,
      deferred, interface, mutable, parameter, covariant, contravariant,
      preSuper, abstractOverride, local, java, static, caseAccessor,
      defaultParameter, defaultInit, paramAccessor, bynameParameter = Value
}
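For comparison, a sketch of the case-object alternative, abridged to a few members (the full list would mirror the enumeration above):

  sealed trait Modifier
  object Modifier {
    case object Protected extends Modifier
    case object Private   extends Modifier
    case object Sealed    extends Modifier
    case object Lazy      extends Modifier
    // ... one case object per remaining modifier
  }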

martin odersky

Jul 11, 2011, 1:06:36 PM
to Paul Phillips, scala-i...@googlegroups.com
Why spend 22 more classes? An enumeration is perfectly fine here.

 -- Martin


Meredith Gregory

Jul 11, 2011, 1:11:44 PM
to scala-i...@googlegroups.com
Dearly Reflective,

i wanted to ask a little about the scope of reflection. In the context of Rosette, which was very influenced by 3-Lisp and Brown, we thought about several different kinds of reflection: structural reflection (which is the primary content of the thread below); computational reflection (we had reflective methods that gave access to the computational machinery and context reified as an actor -- this could be very much likened to the delimited continuations package); and even lexical reflection, in which every element of the language's syntax had an Actor representation.

It would be nice if there was some uniform API for each of these different flavors of reflection. Since the whole point of monads is they represent the reification of computation, i wonder if there isn't a nice monadic view of these different phenomena. i haven't thought about it, at all, but am posing the question here.

Best wishes,

--greg
--
L.G. Meredith
Managing Partner
Biosimilarity LLC
7329 39th Ave SW

martin odersky

Jul 11, 2011, 1:15:34 PM
to Paul Phillips, scala-i...@googlegroups.com
On Mon, Jul 11, 2011 at 7:00 PM, Paul Phillips <pa...@improving.org> wrote:
As a code review it's going to get too long, so let's give it a better chance by having me send some shorter pieces to the list where I can maybe draw in some other opinions too, since I doubt many of you surf fisheye for comments.  I could really use some kind of futuristic brain dump device right about now.


StandardDefinitions:

I should say none of this is very much deliberated, so it's very much open to change. The difficulty I tried to overcome was getting the basic architecture and abstractions right. Now everything is up for change. In particular, I would expect to expose several more operations and to hide others.
 
 def RootPackage: Symbol
 def RootClass: Symbol
 // Is the root package useful reflectively? The only purpose it
 // serves that I'm aware of is at the source level, when you need
 // to disambiguate "import foo" from "import _root_.foo".
 // Reflectively it just adds a layer of indirection to RootClass.

Probably right. If we take it out let's comment it out first before removing it, so that we can add it easily later on.
 
 def EmptyPackage: Symbol
 def EmptyPackageClass: Symbol
 def ScalaPackage: Symbol
 def ScalaPackageClass: Symbol
 // Having "packages" and "package classes" exposed adds semantic
 // confusion for no real gain, because "package as term" and "package as type"
 // are fictions we create to fit with the scala model.  It is confusing
 // enough inside the compiler.
 //
 // They have the same members:
 // scala> definitions.ScalaPackage.tpe.members.toSet == definitions.ScalaPackageClass.tpe.members.toSet
 // res0: Boolean = true
 //
Yes, it's the same as a module and a module class. The point is, module classes are not directly visible, but modules are. I guess that's why we do have modules in definitions.
 
 // All of them are owned by the package class (or the "package object class"
 // for those defined there, for even more confusion.)
 //
 // I think we should expose one consistent "package" construct at this level,
 // and people can dig in with companionSymbol or similar if they really need to
 // draw distinctions.

We can discuss that. But generally, I would try to keep deviations from the compiler as small as possible. Simply in the interest of getting things done, and also because the compiler's view is time-proven. Treating package classes as special classes simplifies things. And package objects fit nicely into this mindset.

 
 def signature(tp: Type): String
 // 1) "signature" is too general a name, it can mean too many things.
 // 2) In general the type isn't enough to generate the signature, you
 // also need the symbol for which the signature is being constructed.
 // (See usages in Erasure's javaSig.)

 /** Is symbol one of the value classes? */
 def isValueClass(sym: Symbol): Boolean

 /** Is symbol one of the numeric value classes? */
 def isNumericValueClass(sym: Symbol): Boolean

 // These are only here because that's where they evolved, but it's inconsistent.
 // They should be members of AbsSymbol, where far more specific tests
 // already exist (i.e. isEmptyPackage and isEmptyPackageClass are testing
 // for a single symbol.)

Yes, agreed. Do we do the change in the compiler as well then?
 
Names:

 def newTermName(cs: Array[Char], offset: Int, len: Int): TermName
 def newTermName(cs: Array[Byte], offset: Int, len: Int): TermName
 def newTermName(s: String): TermName

 def newTypeName(cs: Array[Char], offset: Int, len: Int): TypeName
 def newTypeName(cs: Array[Byte], offset: Int, len: Int): TypeName
 def newTypeName(s: String): TypeName

 // I don't see the need for 2/3 of these signatures for reflection.
 // Just these:
 def newTermName(s: String): TermName
 def newTypeName(s: String): TypeName

Agreed.

 -- Martin


Paul Phillips

Jul 11, 2011, 1:51:29 PM
to martin odersky, scala-i...@googlegroups.com
On 7/11/11 10:15 AM, martin odersky wrote:
> We can discuss that. But generally, I would try to keep deviations from
> the compiler as small as possible. Simply in the interest of getting
> things done, and also because the compiler's view is time-proven.
> Treating package classes as special classes simplifies things. And
> package objects fit nicely into this mindset.

I'm not looking to modify the compiler's model, but to limit as much as
possible how many details we expose (which may occasionally lead me to
suggest changes in the model, but as few and small as possible.) It has
taken me years to wrap my head around all this stuff. Every concept we
expose directly makes all the other ones harder to figure out due to the
dark side of Metcalfe's law. In the example discussed above, it can mean
as little as limiting that initial chunk of symbols of Definitions to these:

def RootClass: Symbol
def EmptyPackageClass: Symbol
def ScalaPackageClass: Symbol

It's not like EmptyPackage and ScalaPackage are more than a method call
away.

Another example is access. Although it's a lot better than it used to
be, I think access is still way too complicated. I would propose doing
something like the following:

1) Stop exposing all the methods relating to access, in favor of
2) def access: Access

And then something like this, which is off the top of my head and does
not cover every imaginable base (especially the joy of all the boundary
conditions with java where we have to change the rules) but would be an
order of magnitude more usable.

class Access {
  def isPublic: Boolean
  def isObjectPrivate: Boolean
  def isAccessibleFromSubclasses: Boolean
  def isAccessibleFrom(tpe: Type): Boolean
  def accessBoundary: Boolean
}

> Yes, agreed. Do we do the change in the compiler as well then?

Yes, although there is a larger cleanup in demand with respect to what
is in Definitions and what is on symbols and occasionally types.

Paul Phillips

Jul 11, 2011, 1:55:33 PM
to martin odersky, scala-i...@googlegroups.com
On 7/11/11 10:51 AM, Paul Phillips wrote:
> class Access {
> def isPublic: Boolean
> def isObjectPrivate: Boolean
> def isAccessibleFromSubclasses: Boolean
> def isAccessibleFrom(tpe: Type): Boolean
> def accessBoundary: Boolean
> }

Oops, that last one should be "Symbol".

martin odersky

Jul 11, 2011, 2:09:21 PM
to Paul Phillips, scala-i...@googlegroups.com

What do you gain with this? There are still the same methods,
just one more indirection.

 -- Martin

Paul Phillips

Jul 11, 2011, 2:36:23 PM
to martin odersky, scala-i...@googlegroups.com
On 7/11/11 11:09 AM, martin odersky wrote:
> What do you gain with this? There are still the same methods,
> just one more indirection.

They aren't the same methods. No such methods exist on Symbols. To the extent that they exist at all, they are in other places and buried in implementation details. The indirection is useful because access is ridiculously complicated (or at least I assume so based on the dozens of open bugs related to it and the fact that I keep finding new ones), yet what people actually want to know is most of the time extremely simple: "Can I access that thing?" That covers 90% of access needs. I don't think people should have to have any idea about the information presented in the following comment unless they are unflinching masochists who seek it out.

/**
* Set when symbol has a modifier of the form private[X], NoSymbol otherwise.
*
* Access level encoding: there are three scala flags (PRIVATE, PROTECTED,
* and LOCAL) which combine with value privateWithin (the "foo" in private[foo])
* to define from where an entity can be accessed. The meanings are as follows:
*
* PRIVATE access restricted to class only.
* PROTECTED access restricted to class and subclasses only.
* LOCAL can only be set in conjunction with PRIVATE or PROTECTED.
* Further restricts access to the same object instance.
*
* In addition, privateWithin can be used to set a visibility barrier.
* When set, everything contained in the named enclosing package or class
* has access. It is incompatible with PRIVATE or LOCAL, but is additive
* with PROTECTED (i.e. if either the flags or privateWithin allow access,
* then it is allowed.)
*
* The java access levels translate as follows:
*
* java private: hasFlag(PRIVATE) && !hasAccessBoundary
* java package: !hasFlag(PRIVATE | PROTECTED) && (privateWithin == enclosing package)
* java protected: hasFlag(PROTECTED) && (privateWithin == enclosing package)
* java public: !hasFlag(PRIVATE | PROTECTED) && !hasAccessBoundary
*/

Paul Phillips

Jul 11, 2011, 3:02:17 PM
to martin odersky, scala-i...@googlegroups.com
I think it would be very beneficial in general to drive this more from
the standpoint of the uses we are trying to enable, and make every
implementation detail we are considering exposing justify itself through
specific need. (This would also clarify for me what uses we are trying
to enable.) There is significant tension between overall capability and
common case usability. That can be addressed a few different ways, but
the main outcome I hope to avoid is unnecessary exporting of
complexity from the compiler into userland. Maybe I'm just really,
really slow, but if other people are anything like me, they would be in
trouble.

If the current API shipped tomorrow (which I realize is not proposed)
there would be an immediate need for another reflection library to wrap
and simplify the first reflection library. That's not necessarily the
end of the world, but it'd be nice if it were immediately usable out of
the box by someone already familiar with reflection and the scala type
system (at the language level, not the implementation level) without
major intellectual dislocation.

Adriaan Moors

Jul 11, 2011, 3:36:51 PM
to scala-i...@googlegroups.com
immediately usable out of the box by someone already familiar with reflection 
and the scala type system (at the language level, not the implementation level) 
I agree this should drive the design. 

This is win-win, because the more details we expose in the API, the more it's going to hurt when the implementation needs to change.
Conversely, the more high-level and "conceptually" accurate (rather than merely 'reflecting' the implementation), the better, 
as the concepts are more stable (in time) than the implementation.

martin odersky

Jul 11, 2011, 3:53:19 PM
to scala-i...@googlegroups.com
Agreed, if we can get it to work in time. Reflection was promised
for years and never materialized; I believe precisely because trying to dress up compiler internals in a user-friendly way leads to a morass of decisions. Not saying it can't be done, just a note that shipping something is better than waiting for the ideal library to crystalize itself.

(I would not argue this way if we risked shipping something fundamentally broken, but that's not the case: the compiler model is time-proven, so shipping this one is a viable option. If we can simplify, so much the better.)

Cheers

 -- Martin

Paul Phillips

Jul 11, 2011, 4:15:38 PM
to scala-i...@googlegroups.com, martin odersky
I don't think it will be difficult to constrain the api if we are
talking about concrete things. It can be difficult to argue against
hypotheticals though. ("Oh, but we need that because otherwise you
can't frotz the bizulator after phase refchecks.")

Concretely and not too ambitiously, I want the methods on manifest to
work correctly. <:<, =:=, typeArguments, etc. And then whatever the
working analogue of the no-longer-working methods like
Class#getGenericReturnType so we can discover stuff like Option[Int]
again. Conceptually, "method manifests".

I would be fine with (even in favor of) picking an extremely sparse api
which lets us do the above and shipping as soon as possible. This seems
more likely to evolve toward good outcomes than exposing a lot of things
up front before we have much idea how they will work out in the world
outside compiler hackers.

martin odersky

Jul 11, 2011, 4:25:15 PM
to Paul Phillips, scala-i...@googlegroups.com
Yes, exactly. My thinking was also that we should export a minimalistic set of compiler services. The challenge was to come up with an architecture that makes this possible. So I won't generally argue at all against taking stuff out; we can always put it in later.

Cheers

 -- Martin

Paul Phillips

Jul 11, 2011, 5:16:13 PM
to scala-i...@googlegroups.com, Josh Suereth
On 7/11/11 5:13 AM, Josh Suereth wrote:
> one of my concerns with the cake pattern is ensuring the layered types
> are referencable outside the universe they are contained within.
>
> Although x.T forSome { val x : Universe } or val z = ImplObj.T can work,
> I know that this typing can be confusing for those new to scala.

This is a good point. I still struggle constantly with stupid annoying
mismatches between e.g. global.Symbol and some.other.global.Symbol which
is actually the same global but I didn't manage to swaddle the singleton
in enough blankets to survive the trip. And look at how much trouble
beginners have understanding the parser combinator architecture: I
understand because I remember being equally mystified.

On 7/11/11 5:49 AM, Miles Sabin wrote:
> Dependent method types enabled by default would help enormously ...

Also a good point. I don't know how robust they are: probably a bad
idea to count on them much, but maybe we should be talking about
promoting them up somewhere above "-Yworlds-longest-option."

Adriaan Moors

Jul 11, 2011, 5:45:30 PM
to scala-i...@googlegroups.com
This is a good point.  I still struggle constantly with stupid annoying
mismatches between e.g. global.Symbol and some.other.global.Symbol which
is actually the same global but I didn't manage to swaddle the singleton
in enough blankets to survive the trip.  
at the same time, this is a feature that should also come in handy in the reflection library: 
different instantiations talk about incompatible views of the program (if they're compatible, they must be based off of the same path)

let's provide the right blankets for our poor users, though, by all means
 
And look at how much trouble beginners have understanding the parser combinator architecture:
I understand because I remember being equally mystified.
And so was I, although I think it was the way the cake was sliced that made the design hard to grasp in this case (for me)
(I still can't conjure up the details of the design without looking at the source)


On 7/11/11 5:49 AM, Miles Sabin wrote:
> Dependent method types enabled by default would help enormously ...

Also a good point.  I don't know how robust they are: probably a bad
idea to count on them much, but maybe we should be talking about
promoting them up somewhere above "-Yworlds-longest-option."
We wrote a paper that pushes them pretty hard, so I am reasonably confident they are ready for prime time. 

(Only half kidding: I'll do more testing before having them on by default, but 
I'm happy to work on getting that done for the next release. 
In the meantime, please assign any issues with them straight to me.)

Kevin Wright

Jul 13, 2011, 5:44:21 AM
to scala-i...@googlegroups.com
I've put this proposal on the wiki, and also made a stab at adding some of the responses.


It's open to everyone, so please feel free to add anything I've missed :)
Old user accounts from trac are valid for login (but in all lower-case), or you can sign up for an account via Jira - https://issues.scala-lang.org


--
Kevin Wright

gtalk / msn : kev.lee...@gmail.com
google+: http://gplus.to/thecoda
mail: kevin....@scalatechnology.com
vibe / skype: kev.lee.wright
quora: http://www.quora.com/Kevin-Wright
twitter: @thecoda

"My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger" ~ Dijkstra

Meredith Gregory

Jul 16, 2011, 2:16:14 AM
to scala-i...@googlegroups.com
Dear All,

i have a specific use case that needs reflection that i would like to see supported by the package. The use case is the generic calculation of a Zipper for (at least) regular types. For purposes of discussion, a Zipper for a regular type, T, can be thought of as a triple consisting of
  • a type, called Location, derived from T
  • a Navigator, built using the type Location
  • and a Mutator, built using the type Location
All of the reflective magic is evinced in the Location. For a given regular type, T, the type Location[T] is merely a pair
  • ( T, ∂T )
where ∂T is the differentiation of T using some variant of the McBride algorithm. The simplest implementation i can think of would demand the runtime reflection of T, denote it R(T), and calculate diff( R( T ) ). Then ∂T is the unreflection, or dynamically loaded compile-time representation, of diff( R( T ) ). This suffices to implement Location[T].

You can see McBride's paper for a definition of regular types or the Ghani, et al, paper for the extension to a larger class of types. For background, i have implemented all of McBride's logic in Scala, here. Likewise a simple implementation of Zipper restricted to the type of tree is to be found here.

Best wishes,

--greg

Alex Cruise

Jul 25, 2011, 4:10:52 PM
to scala-i...@googlegroups.com
Sorry to dig up a buried thread, but while we're on the subject, could we consider the possibility of providing a means of discovering, given a sealed class/trait, the implementations that were defined in the same file?
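(For illustration, this eventually surfaced in the later reflection API as knownDirectSubclasses, which relies on exactly that sealed, same-file property:)

  import scala.reflect.runtime.universe._

  sealed trait Fruit
  case class Apple() extends Fruit
  case class Pear() extends Fruit

  println(typeOf[Fruit].typeSymbol.asClass.knownDirectSubclasses)
  // Set(class Apple, class Pear)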

-0xe1a

Jorge Ortiz

Jul 26, 2011, 8:01:10 PM
to scala-i...@googlegroups.com
Hi everyone,

I wanted to chime in with some thoughts about the proposed reflection library. I talked briefly about this with PaulP and others at Scalathon last weekend, and promised to write up my thoughts and send them here.

So far, it looks like the reflection library aims to make the compiler's information about symbols, trees, and types available at runtime. This sounds awesome and I can't wait to take advantage of it. However, I'd like to bring up an additional type of reflection that would also be a very welcome addition to Scala: being able to reflect on compile-time information, at compile time.

What I mean is, if I have a class:

    case class Person(name: String, age: Int)

I should be able to get a handle of something like a "Field[Person, String]" for `name` and a "Field[Person, Int]" for `age`. This is similar to the Field types that Lift Record makes available, as well as to the scalaz.Lens types made available by the Lensed compiler plugin (https://github.com/gseitz/Lensed).
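A hand-rolled sketch of the kind of handle I mean (the Field trait and its members are hypothetical; the point is that the compiler would materialize these automatically, the way it materializes Manifests):

  case class Person(name: String, age: Int)

  trait Field[R, T] {
    def name: String
    def get(record: R): T
  }

  // written by hand today; the proposal is to have these generated
  val nameField = new Field[Person, String] {
    def name = "name"
    def get(p: Person) = p.name
  }
  val ageField = new Field[Person, Int] {
    def name = "age"
    def get(p: Person) = p.age
  }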

Having a way to a) enumerate and b) get a typed handle on to these fields types lets us do cool things with serialization for database storage, as well as build type-safe database queries. Our open source project Rogue (https://github.com/foursquare/rogue) takes advantage of Lift Record in this way.

The two disadvantages to using Lift Record are that it is verbose and it has a painful object allocation (and thus garbage collection) overhead. The pattern looks something like this:

  class Venue extends MongoRecord[Venue] {
    def meta = Venue
    def id = _id.value

    object _id extends ObjectIdField(this)
    object legacyid extends LongField(this) { override def name = "legid" }
    object userid extends LongField(this)
    object closed extends BooleanField(this)
    object tags extends MongoListField[Venue, String](this)
  }

  object Venue extends Venue with MongoMetaRecord[Venue]

This lets us access 'reflection' info at run time (name of the record, name of the fields, types of the fields) as well as at compile time (a record M is typed as Record[M], a field belonging to that record is typed as Field[M, T]). Unfortunately, this creates a lot of pointers and object allocations. Materializing these heavy Record objects, and collecting all the garbage they generate, are the two biggest limiting factors in the performance of our servers right now.

At the moment, my best idea to mitigate this overhead is with some combination of compiler plugin and code generation (a la Lensed, above). However, this has the drawbacks of requiring specialized knowledge about the compiler (which is hard to come by) and is brittle to Scala version upgrades. I would be _overjoyed_ if this functionality was available through a Scala reflection library.

Cheers,

--j

martin odersky

Jul 28, 2011, 1:31:57 PM
to scala-i...@googlegroups.com
On Wed, Jul 27, 2011 at 2:01 AM, Jorge Ortiz <jorge...@gmail.com> wrote:
[...]

What I mean is, if I have a class:

    case class Person(name: String, age: Int)

I should be able to get a handle of something like a "Field[Person, String]" for `name` and a "Field[Person, Int]" for `age`. This is similar to the Field types that Lift Record makes available, as well as to the scalaz.Lens types made available by the Lensed compiler plugin (https://github.com/gseitz/Lensed).

[...]

I think Manifests are probably the closest to what you want. But I guess what you are asking for is really a more "reflective" type system for Scala. That would extend the mandate of the current project by a pretty large margin.

Cheers

-- Martin

Jorge Ortiz

Jul 28, 2011, 5:16:42 PM
to scala-i...@googlegroups.com
Yes, Manifests, but for fields as well as classes.

Rafael de F. Ferreira

Jul 28, 2011, 5:34:28 PM
to scala-i...@googlegroups.com
+1000 to Jorge's proposal. Even Java's JPA is moving in this
direction, using the annotation processing tool to generate metamodel
classes[1]. Some time ago I blogged about this as my number one
request for Scala [2]. And the first time I saw a proposal for
something similar was in 2006, in the context of Java and mocking
libraries for unit testing, where the author called it "static
reflection"[3].

From my understanding, this feature would not require changes to
Scala's type system per se; rather, it would simply require some more
synthetic code to be generated for case classes (perhaps guided by
annotations).

[1] http://docs.jboss.org/hibernate/entitymanager/3.5/reference/en/html/metamodel.html
[2] http://blog.rafaelferreira.net/2010/06/where-scala-left-me-wanting.html
[3] http://www.lixo.org/archives/2006/09/25/java-feature-request-static-reflection/


--
Rafael de F. Ferreira.
http://www.rafaelferreira.net/

There is another major disadvantage, in that this couples domain model
classes to Lift's records. We might need to refer to the members of a
class for other purposes (such as to configure serialization or in
mock-based unit tests) and would need to rely on these other libraries
being written in terms of lift's records. We might want to change the
persistence mechanism but keep the domain logic. When trying to do
domain modeling it's best to avoid dependencies like the plague.

Jorge's proposal would alleviate all of those problems.

Joni Freeman

Sep 12, 2011, 6:47:46 AM
to scala-i...@googlegroups.com
Hi,

Will it be possible to resolve type aliases with the forthcoming
reflection API? It looks like it is not currently possible to parse that
from the ScalaSig annotation if the type alias is defined in some other
compilation unit.

object CU1 {
  type StringToLong = Map[String, Long]
}

object CU2 {
  import CU1._

  case class Foo(x: StringToLong)
}

Parsing Foo.<init> with scalap gives 'x' as an ExternalSymbol. It could
be that I'm missing something but I didn't find any way to actually
resolve that ExternalSymbol("StringToLong", _, _) here means type
Map[String, Long].

Cheers Joni
