- Should type aliases be preserved in reflection? Consider predef.String, for example, which maps to java.lang.String for a JVM target but not for MSIL. This is especially noteworthy when such an alias appears in a path-dependent type. My current feeling is that aliases should be preserved as-is, with a separate method to canonicalise types; this isn't so different from working with symlinks in file systems (see the sketch after this list).
- Abstract type members vs. type parameters. An interesting challenge, especially when dealing with aliases that translate between the two schemes.
- Higher-kinded types and type lambdas - enough said!
- Existential types and bounds - upper and lower bounds in a WildcardType are currently only available via the toString method.
- Reification - when converting from Java's reflection types, it would be nice if the conversion could take a type parameter, allowing any erasure to be reversed. This offers the potential for a createInstance method with a statically safe return value.
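To make the symlink analogy concrete, here is a minimal sketch of an
alias-preserving Type with a separate canonicalisation step. All names
are invented for illustration; nothing here is the proposed API.

sealed trait Type {
  /** Follow alias chains to the underlying type, like realpath on a symlink. */
  def dealias: Type = this
}
case class ClassType(fullName: String) extends Type
case class AliasType(fullName: String, underlying: Type) extends Type {
  override def dealias: Type = underlying.dealias
}

// predef.String stays distinct from its expansion until explicitly resolved:
val PredefString = AliasType("scala.Predef.String", ClassType("java.lang.String"))
PredefString.dealias  // == ClassType("java.lang.String")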
> - get the Scala runtime class from an instance,
> - obtain Scala members from Scala classes, which carry their original Scala
> types,
> - dereference Scala fields, and invoke Scala methods,
> - query Scala types of these members, for subtype relationships and others.
> - the type descriptor in a manifest should be a reflect.Type.
> - the tree lifted in a Code[T] expression should be a reflect.Tree.
Imho there is some "enclosing" relationship missing.
For instance, it is currently not possible to discover B from A
reflectively here:
object A { object B }
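A sketch of the missing query, with hypothetical names (nothing like
this exists in the current proposal):

trait Symbol {
  def enclosing: Symbol              // symbol of B  -> symbol of A
  def enclosedMembers: List[Symbol]  // symbol of A  -> includes symbol of B
}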
> The idea is to pack most logic into two packages:
>
> reflect.internal contains the implementations
> reflect.api contains the public interfaces.
>
> (Question: Better name for reflect.api? reflect.public would be nice but
> that makes it inaccessible from Java).
I would prefer just "scala.reflect" for the interface. Additionally I
wonder if it would make sense to have the implementation in a
completely different package, and exclude it from any "binary
compatibility" guarantees.
This would make it much easier to keep the code in sync with the one
in the compiler, while still providing access to those who really need
"more" features.
I think it could be a bit like the com.sun namespace in Java, for
instance, which is used internally but for which no guarantees are made.
Independently of that it would be nice to have some sort of @beta
annotation ("This class/method/... might change in the future" like
Google has) so it would be possible to ship the reflection code as a
separate jar file with the next release to get early feedback.
(@deprecated and @migration cover the end of an API's lifecycle; it
would be nice to have something for the start of an API, too.)
Reflection is used for very different things and some people do weird
things with it, so it would be nice to get as much feedback as
possible. Reflection is hard to get right (look at Java, where things
are broken and never got fixed), so before setting things in stone it
would be nice to get it into the hands of as many people as possible,
not just those running trunk.
Thanks and bye!
Simon
> Imho there is some "enclosing" relationship missing.
> For instance, it is currently not possible to discover B from A
> reflectively here:
> object A { object B }
trait OuterTrait { abstract class InnerAbstract(i: Int) }
object OuterObject extends OuterTrait {
  case class InnerConcrete(i: Int) extends InnerAbstract(i)
}
val elephant = OuterObject.InnerConcrete(123)

// This isn't too hard, except I can't think of any good general way
// to serialize the $outer:
val burrow: String = serialize(elephant)

// And that is where my claim falls down:
val elephant = deserialize[OuterObject.InnerConcrete](burrow)
-0xe1a
Perhaps checking whether the library allows for the scenarios
described in Bracha's Mirrors paper would be an interesting benchmark
for the overall design.
Another relevant use case would be to see if the library enables (and
ideally makes easy) writing mapping libraries akin to Rogue or Lift's
Record without having to resort to Field types and query DSLs (versus
ordinary vals/vars and for comprehensions, similar to LINQ).
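Something in this spirit, say (entirely hypothetical; Rogue's real API
differs):

case class Venue(userid: Long, closed: Boolean)
val venues: List[Venue] = Nil  // stand-in for a query source

// Ordinary vals and a for comprehension, LINQ-style, instead of a Field DSL:
val openOwners = for (v <- venues if !v.closed) yield v.userid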
--
Rafael de F. Ferreira.
http://www.rafaelferreira.net/
One of my concerns with the cake pattern is ensuring that the layered
types are referenceable outside the universe they are contained within.
Although x.T forSome { val x: Universe } or val z = ImplObj.T can work,
I know that this typing can be confusing for those new to Scala.
As long as the API package has stable identifiers for the types, I
think the approach is great.
Dependent method types enabled by default would help enormously ...
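As a sketch of what stable identifiers buy us (illustrative names only,
not the proposed packaging):

trait Universe {
  type Symbol
  def symbolOf(name: String): Symbol
}

object ReflectionUniverse extends Universe {
  type Symbol = String  // placeholder implementation
  def symbolOf(name: String) = name
}

// With a stable identifier, the layered types are referenceable
// outside the cake, without forSome gymnastics:
import ReflectionUniverse._
val sym: ReflectionUniverse.Symbol = symbolOf("scala.Predef")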
Cheers,
Miles
--
Miles Sabin
tel: +44 7813 944 528
gtalk: mi...@milessabin.com
skype: milessabin
http://www.chuusai.com/
http://twitter.com/milessabin
Some applications use reflection during initialization but not afterwards. For those it will be important not to leak memory for reflection data (i.e. Symbol or Type instances) that is no longer used.
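For instance, if symbols are interned in a cache, the cache could hold
them weakly so unused reflection data stays collectable. A sketch,
assuming a load-by-name function (the real design may differ):

import java.lang.ref.WeakReference
import scala.collection.mutable

class SymbolCache[S](load: String => S) {
  private val cache = mutable.Map[String, WeakReference[S]]()
  def apply(fullName: String): S =
    cache.get(fullName).flatMap(ref => Option(ref.get)) getOrElse {
      val sym = load(fullName)
      cache(fullName) = new WeakReference(sym)
      sym
    }
}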
Are there other pieces of the compiler we might want to expose in the future? The ICode layer could be a nice basis for a bytecode manipulation library for example.
Cheers,
- Tiark
No specific example in mind. But it would be nice to be able to write
code such as an object browser that can work with a live process (via
RTTI) or with a memory dump (via specific implementations of the
reflection interfaces), or a class browser that works in-process or
remotely. IMO Mirrors are just an application of traditional software
design principles to the problem of reflection.
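A sketch of that separation, with invented names (see the Mirrors paper
for the real treatment):

// Reflection operations live behind an interface, so the data source can vary:
trait Mirror {
  def classByName(name: String): Option[ClassMirror]
}
trait ClassMirror {
  def name: String
  def memberNames: List[String]
}

// One implementation backed by the live VM, another by a heap dump:
class RuntimeMirror extends Mirror {
  def classByName(name: String) = None  // stub: would delegate to the running VM
}
class HeapDumpMirror(dumpPath: String) extends Mirror {
  def classByName(name: String) = None  // stub: would read the dump file
}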
>
>>
>> Another relevant use case would be to see if the library enables (and
>> make it easy, ideally) to write mapping libraries akin to Rogue or
>> Lift's records without having to resort to Field types and query DSLs
>> (versus ordinary val/var's and for comprehensions, similar to LINQ).
>>
> Yes, agreed. We plan to have the library out in an early milestone, so that
> people can experiment with it before it gets baked in a release.
>
Another important case, neglected in java.lang.reflect, is the
reflection of parameter names. This is such a sore spot for Java that
a project was created to hack around it:
http://paranamer.codehaus.org/.
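For Scala the hoped-for shape is simple to state, though the names here
are hypothetical:

trait MethodMirror {
  def name: String
  // e.g. List("dx", "dy") for `def move(dx: Int, dy: Int)` --
  // exactly the information java.lang.reflect drops.
  def parameterNames: List[String]
}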
Cheers.
StandardDefinitions:
def RootPackage: Symbol
def RootClass: Symbol
// Is the root package useful reflectively? The only purpose it
// serves that I'm aware of is at the source level, when you need
// to disambiguate "import foo" from "import _root_.foo".
// Reflectively it just adds a layer of indirection to RootClass.
def EmptyPackage: Symbol
def EmptyPackageClass: Symbol
def ScalaPackage: Symbol
def ScalaPackageClass: Symbol
// Having "packages" and "package classes" exposed adds semantic
// confusion for no real gain, because "package as term" and "package as type"
// are fictions we create to fit with the scala model. It is confusing
// enough inside the compiler.
//
// They have the same members:
// scala> definitions.ScalaPackage.tpe.members.toSet == definitions.ScalaPackageClass.tpe.members.toSet
// res0: Boolean = true
//
// All of them are owned by the package class (or the "package object class"
// for those defined there, for even more confusion.)
//
// I think we should expose one consistent "package" construct at this level,
// and people can dig in with companionSymbol or similar if they really need to
// draw distinctions.
def signature(tp: Type): String
// 1) "signature" is too general a name, it can mean too many things.
// 2) In general the type isn't enough to generate the signature, you
// also need the symbol for which the signature is being constructed.
// (See usages in Erasure's javaSig.)
/** Is symbol one of the value classes? */
def isValueClass(sym: Symbol): Boolean
/** Is symbol one of the numeric value classes? */
def isNumericValueClass(sym: Symbol): Boolean
// These are only here because that's where they evolved, but it's inconsistent.
// They should be members of AbsSymbol, where far more specific tests
// already exist (i.e. isEmptyPackage and isEmptyPackageClass are testing
// for a single symbol.)
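// I.e., roughly this placement (a sketch, not the current code):
abstract class AbsSymbol {
  def isEmptyPackage: Boolean        // existing single-symbol tests
  def isEmptyPackageClass: Boolean
  def isValueClass: Boolean          // moved here from StandardDefinitions
  def isNumericValueClass: Boolean
}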
Names:
def newTermName(cs: Array[Char], offset: Int, len: Int): TermName
def newTermName(cs: Array[Byte], offset: Int, len: Int): TermName
def newTermName(s: String): TermName
def newTypeName(cs: Array[Char], offset: Int, len: Int): TypeName
def newTypeName(cs: Array[Byte], offset: Int, len: Int): TypeName
def newTypeName(s: String): TypeName
// I don't see the need for 2/3 of these signatures for reflection.
// Just these:
def newTermName(s: String): TermName
def newTypeName(s: String): TypeName
object Modifier extends Enumeration {
  val `protected`, `private`, `override`, `abstract`, `final`,
      `sealed`, `implicit`, `lazy`, `case`, `trait`,
      deferred, interface, mutable, parameter, covariant, contravariant,
      preSuper, abstractOverride, local, java, static, caseAccessor,
      defaultParameter, defaultInit, paramAccessor, bynameParameter = Value
}
As a code review it's going to get too long, so let's give it a better chance by having me send some shorter pieces to the list, where I can maybe draw in some other opinions too, since I doubt many of you surf FishEye for comments. I could really use some kind of futuristic brain-dump device right about now.
I'm not looking to modify the compiler's model, but to limit as much as
possible how many details we expose (which may occasionally lead me to
suggest changes in the model, but as few and as small as possible). It
has taken me years to wrap my head around all this stuff. Every concept
we expose directly makes all the other ones harder to figure out, due
to the dark side of Metcalfe's law. In the example discussed above, it
can mean as little as limiting that initial chunk of symbols in
Definitions to these:
def RootClass: Symbol
def EmptyPackageClass: Symbol
def ScalaPackageClass: Symbol
It's not like EmptyPackage and ScalaPackage are more than a method call
away.
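For example (using the companionSymbol mentioned above, or something
like it):

val EmptyPackage = EmptyPackageClass.companionSymbol
val ScalaPackage = ScalaPackageClass.companionSymbol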
Another example is access. Although it's a lot better than it used to
be, I think access is still way too complicated. I would propose doing
something like the following:
1) Stop exposing all the methods relating to access, in favor of
2) def access: Access
And then something like this, which is off the top of my head and does
not cover every imaginable base (especially the joy of all the boundary
conditions with Java, where we have to change the rules) but would be
an order of magnitude more usable.
class Access {
  def isPublic: Boolean
  def isObjectPrivate: Boolean
  def isAccessibleFromSubclasses: Boolean
  def isAccessibleFrom(tpe: Type): Boolean
  def accessBoundary: Boolean
}
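Given def access: Access, the common question collapses to one
expression (a sketch with hypothetical names):

// "Can I access that thing?" -- the 90% case:
def canCall(member: Symbol, from: Type): Boolean =
  member.access.isPublic || member.access.isAccessibleFrom(from)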
> Yes, agreed. Do we do the change in the compiler as well then?
Yes, although there is a larger cleanup in demand with respect to what
is in Definitions and what is on symbols and occasionally types.
Oops, that last one should be "Symbol".
They aren't the same methods. No such methods exist on Symbols. To the extent that they exist at all, they are in other places and buried in implementation details. The indirection is useful because access is ridiculously complicated (or at least I assume so, based on the dozens of open bugs related to it and the fact that I keep finding new ones) yet what people actually want to know is most of the time extremely simple: "Can I access that thing?" That covers 90% of access needs. I don't think people should have to have any idea about the information presented in the following comment unless they are unflinching masochists who seek it out.
/**
* Set when symbol has a modifier of the form private[X], NoSymbol otherwise.
*
* Access level encoding: there are three scala flags (PRIVATE, PROTECTED,
* and LOCAL) which combine with value privateWithin (the "foo" in private[foo])
* to define from where an entity can be accessed. The meanings are as follows:
*
* PRIVATE access restricted to class only.
* PROTECTED access restricted to class and subclasses only.
* LOCAL can only be set in conjunction with PRIVATE or PROTECTED.
* Further restricts access to the same object instance.
*
* In addition, privateWithin can be used to set a visibility barrier.
* When set, everything contained in the named enclosing package or class
* has access. It is incompatible with PRIVATE or LOCAL, but is additive
* with PROTECTED (i.e. if either the flags or privateWithin allow access,
* then it is allowed.)
*
* The java access levels translate as follows:
*
* java private: hasFlag(PRIVATE) && !hasAccessBoundary
* java package: !hasFlag(PRIVATE | PROTECTED) && (privateWithin == enclosing package)
* java protected: hasFlag(PROTECTED) && (privateWithin == enclosing package)
* java public: !hasFlag(PRIVATE | PROTECTED) && !hasAccessBoundary
*/
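The Java translation in that comment, written out as code (a sketch
assuming the compiler-internal helpers hasFlag, hasAccessBoundary,
privateWithin and enclosingPackage):

def javaAccessLevel(sym: Symbol): String =
  if (sym.hasFlag(PRIVATE) && !sym.hasAccessBoundary) "private"
  else if (sym.hasFlag(PROTECTED) && sym.privateWithin == sym.enclosingPackage) "protected"
  else if (!sym.hasFlag(PRIVATE | PROTECTED) && sym.privateWithin == sym.enclosingPackage) "package"
  else if (!sym.hasFlag(PRIVATE | PROTECTED) && !sym.hasAccessBoundary) "public"
  else "scala-specific"  // e.g. private[foo] combined with PROTECTED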
If the current API shipped tomorrow (which I realize is not proposed)
there would be an immediate need for another reflection library to wrap
and simplify the first reflection library. That's not necessarily the
end of the world, but it'd be nice if it were immediately usable out of
the box by someone already familiar with reflection and the scala type
system (at the language level, not the implementation level) without
major intellectual dislocation.
Concretely and not too ambitiously, I want the methods on manifest to
work correctly: <:<, =:=, typeArguments, etc. And then whatever the
working analogue is of the no-longer-working methods like
Method#getGenericReturnType, so we can discover stuff like Option[Int]
again. Conceptually, "method manifests".
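For instance, all of these exist today but can't be trusted; the bar is
just that they work (the behaviour shown is the desired one):

val m = manifest[List[Int]]
m <:< manifest[Traversable[Int]]     // should reliably be true
m.typeArguments                      // should be List(Int)
manifest[Option[Int]].typeArguments  // should recover the erased Int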
I would be fine with (even in favor of) picking an extremely sparse api
which let us do the above and shipping as soon as possible. This seems
more likely to evolve toward good outcomes than exposing a lot of things
up front before we have much idea how they will work out in the world
outside compiler hackers.
This is a good point. I still struggle constantly with stupid annoying
mismatches between e.g. global.Symbol and some.other.global.Symbol which
is actually the same global but I didn't manage to swaddle the singleton
in enough blankets to survive the trip. And look at how much trouble
beginners have understanding the parser combinator architecture: I
understand because I remember being equally mystified.
On 7/11/11 5:49 AM, Miles Sabin wrote:
> Dependent method types enabled by default would help enormously ...
Also a good point. I don't know how robust they are: probably a bad
idea to count on them much, but maybe we should be talking about
promoting them up somewhere above "-Yworlds-longest-option."
Hi everyone,

I wanted to chime in with some thoughts about the proposed reflection
library. I talked briefly about this with PaulP and others at Scalathon
last weekend, and promised to write up my thoughts and send them here.

So far, it looks like the reflection library aims to make the
compiler's information about symbols, trees, and types available at
runtime. This sounds awesome and I can't wait to take advantage of it.
However, I'd like to bring up an additional kind of reflection that
would also be a very welcome addition to Scala: being able to reflect
on compile-time information, at compile time.

What I mean is, if I have a class:

case class Person(name: String, age: Int)

I should be able to get a handle on something like a
"Field[Person, String]" for `name` and a "Field[Person, Int]" for
`age`. This is similar to the Field types that Lift Record makes
available, as well as to the scalaz.Lens types made available by the
Lensed compiler plugin (https://github.com/gseitz/Lensed).

Having a way to a) enumerate and b) get a typed handle on these field
types lets us do cool things with serialization for database storage,
as well as build type-safe database queries. Our open source project
Rogue (https://github.com/foursquare/rogue) takes advantage of Lift
Record in this way.

The two disadvantages to using Lift Record are that it is verbose and
that it has a painful object allocation (and thus garbage collection)
overhead. The pattern looks something like this:

class Venue extends MongoRecord[Venue] {
  def meta = Venue
  def id = _id.value
  object _id extends ObjectIdField(this)
  object legacyid extends LongField(this) { override def name = "legid" }
  object userid extends LongField(this)
  object closed extends BooleanField(this)
  object tags extends MongoListField[Venue, String](this)
}
object Venue extends Venue with MongoMetaRecord[Venue]

This lets us access 'reflection' info at run time (the name of the
record, the names of the fields, the types of the fields) as well as at
compile time (a record M is typed as Record[M], a field belonging to
that record is typed as Field[M, T]). Unfortunately, this creates a lot
of pointers and object allocations. Materializing these heavy Record
objects, and collecting all the garbage they generate, are the two
biggest limiting factors in the performance of our servers right now.

At the moment, my best idea for mitigating this overhead is some
combination of compiler plugin and code generation (a la Lensed,
above). However, this has the drawbacks of requiring specialized
knowledge about the compiler (which is hard to come by) and being
brittle to Scala version upgrades. I would be _overjoyed_ if this
functionality were available through a Scala reflection library.
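A rough sketch of the kind of typed handle being asked for; everything
below is illustrative and hand-written, where the hope is that the
compiler would synthesize it:

trait Field[R, T] {
  def name: String
  def get(record: R): T
}

// What synthesized handles for Person could look like:
object PersonFields {
  val name = new Field[Person, String] {
    def name = "name"
    def get(p: Person) = p.name
  }
  val age = new Field[Person, Int] {
    def name = "age"
    def get(p: Person) = p.age
  }
}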
From my understanding, this feature would not require changes to
Scala's type system per se; rather, it would simply require some more
synthetic code to be generated for case classes (perhaps guided by
annotations).
--
Rafael de F. Ferreira.
http://www.rafaelferreira.net/
There is another major disadvantage, in that this couples domain model
classes to Lift's records. We might need to refer to the members of a
class for other purposes (such as to configure serialization or in
mock-based unit tests) and would need to rely on these other libraries
being written in terms of lift's records. We might want to change the
persistence mechanism but keep the domain logic. When trying to do
domain modeling it's best to avoid dependencies like the plague.
Jorge's proposal would alleviate all of those problems.
Will it be possible to resolve type aliases with the forthcoming
reflection API? It looks like it is not currently possible to parse
them from the ScalaSig annotation when the type alias is defined in
some other compilation unit.
object CU1 {
  type StringToLong = Map[String, Long]
}

object CU2 {
  import CU1._
  case class Foo(x: StringToLong)
}
Parsing Foo.<init> with scalap gives 'x' as an ExternalSymbol. It could
be that I'm missing something, but I didn't find any way to actually
resolve that ExternalSymbol("StringToLong", _, _) here means the type
Map[String, Long].
Cheers Joni