Let's remove the dummy for now. If this costs you too much, I think we can sacrifice a few broken nightlies, right?
On Thu, Apr 26, 2012 at 11:56 AM, Eugene Burmako <eugene....@epfl.ch> wrote:
> Toolboxes are supposed to be an integral part of the new reflection API, so
> we have to add them somewhere. What would be the most appropriate location?

There's no way for me to answer that until I've pulled this thread
longer. If they're an "integral" part of the API then we are in
trouble, because they apparently require the compiler. Fancy
applications like reification and macros cannot lead us to impose a 15
MB unnecessary dependency on the great majority of people who just
want to be able to tell what the signature of that method is.
Just to review, the biggest reason a reflection library is important
is that you cannot retrieve accurate types via java generic signatures
because of restrictions (implicit and explicit) on their form. This
is felt especially keenly because until 2.8.x we provided accurate
signatures for primitive types, but due to the intolerance of the rest
of the world to nonconformant signatures we stopped doing that in 2.9,
after people had come to rely on it.
The direct remedy for that situation involves a few interfaces,
analogous to java.lang.reflect.Method, java.lang.Class, etc. which
read the scala signature and provide accurate information. Everything
else - all of it - should be secondary to providing this. And the
part I described requires only a tiny bit of new code.
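The "few interfaces" described above could be pictured as a thin layer analogous to java.lang.reflect. A minimal sketch, with all names hypothetical (this is not an actual API, just the intended shape):

```scala
// Hypothetical sketch of a small signature-reading surface, analogous to
// java.lang.reflect.Method/Class but reporting accurate Scala types.
trait ScalaMethod {
  def name: String
  def returnType: String // e.g. "Int", where Java generic signatures would say "Object"
}

trait ScalaClass {
  def fullName: String
  def methods: List[ScalaMethod]
}

// Toy stand-in implementations, just to show the shape of the surface.
final case class SimpleMethod(name: String, returnType: String) extends ScalaMethod
final case class SimpleClass(fullName: String, methods: List[ScalaMethod]) extends ScalaClass
```

A real implementation would read the Scala signature from the classfile; the point is only that this surface needs no compiler dependency.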
The ambitious project - reification, expression trees, macros,
whatever else - is nice, but it cannot be allowed to impede the simple
delivery of accurate type information at runtime, or we have failed.
I almost want to branch off and do that little bit from scratch and
without dependencies rather than attempting to untangle things any
further.
Can we have mkToolBox along with mkMirror? Or do I put mkToolBox in scala.reflect.runtime?
Reifying types would be hard, because that would require the reifier to generate symbol creation code for all the symbols in the reifee. I tried that before and failed. Is there a way to do that easily?
Also, DynamicProxy https://github.com/scala/scala/blob/master/src/library/scala/reflect/DynamicProxy.scala currently uses toolboxes to resolve default parameters and infer type arguments. I believe that would need to be changed as well.
Yep, that's what I meant when mentioning scala.reflect.runtime, which is a package in scala-compiler. We definitely can move toolboxes into compiler.
Personally I'd expect reification to preserve everything meaningful
about the code, s.t. reification result could be typechecked again and
produce the same result. In the same vein, if I declare a class in a
reifee, I'd like to get all the info about its type.
However I am not sure about Martin's use case with typed reifications.
Maybe it would be enough to reify packed types. Maybe we don't need to
reify everything. Martin, could you please elaborate?
On 27 Apr 2012, at 00:31, Paul Phillips <pa...@improving.org> wrote:
> On Thu, Apr 26, 2012 at 2:22 PM, Eugene Burmako <eugene....@epfl.ch> wrote:
>> The main problem is with reification of types that refer to locally defined
>> stuff.
>>
>> For example. How would you reify
>>
>> class C
>> object C {
>>   def foo: C = ???
>> }
>
> Oh yeah, I meant to email you about this earlier because I saw that
> question in the code.
>
> You've discovered "the avoidance problem", or at least a variation of
> it. It's similar to the question of what type you infer for this:
>
> def foo = { class C ; new C }
>
> The answer must depend on what it means to be reified and what if any
> contextual requirements are attached to reified types. In the type
> inference example you pick the least supertype of C which isn't C.
> Depending on what you want out of the reified type, you could do
> something like that, or if you try to preserve it you have to reify
> more stuff.
>
> Note: I only dabble in theory.
We have two usages of toolboxes in library:
1) DynamicProxy,
2) Expr.eval (when used outside of reify).
The former needs toolboxes to resolve default parameters and type
arguments. The latter I've seen a lot in macros, when people eval
stuff in macro expansions (e.g. see Heiko's example here:
https://issues.scala-lang.org/browse/SI-5713).
I think both uses are quite important, so I suggest that we do leave
toolboxes in mirrors, and instantiate them in the same way we
instantiate mirrors in library: try { <reflective construction> }
catch { throw new UnsupportedOperationException(...) }. So, scala-
library will reflectively call into scala-reflect, when it needs a
mirror, and scala-reflect will reflectively call into scala-internals,
when it needs a toolbox.
As we've seen, these reflection links are brittle, but at least we
don't add conceptually new brittleness with this approach.
On Apr 27, 12:58 am, Paul Phillips <pa...@improving.org> wrote:
> I'm kind of stuck; you can have free rein.
>
> On Thu, Apr 26, 2012 at 2:52 PM, Eugene Burmako <eugene.burm...@epfl.ch> wrote:
> > Paul, how would you prefer to do it? I will have time tomorrow to perform
> > the move, but I'd hate to have us produce conflicting changesets. Could you
> > push your results before going to sleep?
If the issue is "asSeenFrom" and such, we can limit the granularity of
members as much as necessary. I am the good, the perfect is not my
enemy.
It's like groundhog day around here sometimes, because we
do this quite a lot: throw ourselves into solving problems most people
didn't even know they had, while studiously ignoring the problems they
can be observed to have.
Not intending to force anything onto you, just to provide some info. scala-reflect (copied from scala.reflect.runtime and scala.reflect.internal, with some compiler-specific cruft removed) is 2059k packed. Of that, ~250k is runtime, ~200k is symbols, ~100k is trees, ~700k is types, ~300k is definitions, and ~100k is importers. If you're interested I can provide a more detailed breakdown later.

Maybe it'll be feasible to add some stuff to scala-library (~500k) and have basic reflection services working without imposing scala-reflect on the user. Would this be interesting to you, Paul? Would it be an acceptable price to pay?
On Fri, Apr 27, 2012 at 8:54 PM, Paul Phillips <pa...@improving.org> wrote:
> It's like groundhog day around here sometimes, because we
> do this quite a lot: throw ourselves into solving problems most people
> didn't even know they had, while studiously ignoring the problems they
> can be observed to have.

+1, FWIW. I think both have their place (I know you're not arguing against that): reflection+macros as new, researchy, even experimental features -- let's call those the purebred shiny race ponies. I see the old reflection as the robust workhorse that pulls the plough: even though all it does is trudge back and forth through the mud, blind to all the tractors racing past it, people still depend on it for their potato harvest.
More technically, it would be a regression to impose more requirements on usages of reflection (classloader issues, size increases, slowdowns, ...).
On Fri, Apr 27, 2012 at 10:14 AM, martin odersky <martin....@epfl.ch> wrote:
> Seriously, I think that would just perpetuate the piecework we did so far in
> scalap, manifests and so on. We should do it right or not do it.

I'm all for that. However my definition of "doing it right" is "doing
it usefully", not "doing it with maximum possible fidelity
regardless of what tradeoffs must be made to get there." I can say
without any doubt that there is a very useful improvement to be made
on what one can learn from
x.getClass.getMethods.map(_.toGenericString) which does not require
dragging many additional megabytes to obtain. And it looks to me like
2.10 is shaping up to exclude that improvement.
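For a concrete illustration of what that call can and cannot tell you (the example class is made up): a primitive return type survives in the plain signature, but in generic positions Scala's Int surfaces as java.lang.Object, because Java generic signatures cannot mention primitives as type arguments.

```scala
// Made-up example class: one monomorphic and one polymorphic method.
class Box {
  def get: Int = 42
  def map[B](f: Int => B): List[B] = List(f(get))
}

object SignatureDemo extends App {
  // "get" reports "int" accurately in its erased signature, but in "map"
  // the Int inside Function1 shows up as java.lang.Object in the generic
  // signature -- which is precisely the information gap being discussed.
  classOf[Box].getMethods
    .filter(m => m.getName == "get" || m.getName == "map")
    .map(_.toGenericString)
    .foreach(println)
}
```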
> I also have not yet quite understood what you propose:

I'm attempting to communicate it sufficiently abstractly that the
trees don't blur out the forest. What I propose is that we don't have
to destroy the village to save it. I propose to provide a superior
version of java reflection which takes into account the information in
scala signatures, without new dependencies, and to make this part of
the library. Failing that, I intend to ship something like that
personally. If I'd ever thought we would fail to provide that in 2.10
I'd have done it long ago.
but reflection includes Typeable, which depends on reify (last time I checked), which depends on macros
The previous approach to manifests makes me cringe on how partial and wrong it was.
I think that's the distinction we need to make. The new reflection started encroaching on the old reflection and imposing new constraints that are just infeasible for us to enforce.

If we had 3 years of people not using Manifests *or* people were always including the compiler on the classpath when running scala, maybe we'd have been fine. The reality is, that's not the case, and it's very breaking for most people.
I think the big mistake with those was releasing anything besides ClassManifest, which is the 99% use case in a lot of my code.
I think you're right for rich reflection. However, when most of the time I need just a ClassManifest and I've been writing Manifest, maybe we just deprecate the advanced uses in Manifest, and point people at TypeTags for that functionality. Was ClassManifest mis-designed?
Maybe it'd be possible to add several hundred kilobytes to the stdlib
and get limited reflection capabilities (say, no importers or
definitions). As I see it, the dummy might actually not be a complete
dummy, but provide services of its own. Would it be useful to pursue
this direction, or is even +300-500k to the stdlib completely unacceptable?
- default arguments don't belong in API
- as mentioned before, boolean arguments to methods are bad; but
boolean arguments with defaults are worse
I disagree. If an argument is a boolean it is a boolean. Just remember to pass a named parameter, that's all.
process(x)
process(x, extendedChecks = true)
I don't think there's a clearer way to express things.
Dear Martin,
I'd prefer the mirror in reflect.jar strictly extending the mirror in library.jar, not as 2 implementations of an abstract Universe. Close to virtual classes.
// in library
package reflect

abstract class Universe {
  type T <: TImpl
  trait TImpl {
  }
  def T(): T
}

/*internal*/ class UniverseImpl extends Universe {
  // knot tied
}

// in reflect
package reflect.internal

abstract class Universe extends reflect.Universe {
  type T <: TImpl
  trait TImpl extends super.TImpl {
  }
}

/*internal*/ class UniverseImpl extends Universe {
  // knot tied
}
I think quite a lot of code should be the same and thus ideally not repeated in reflect.jar. This is of course an implementation detail. More important is that I like to see it as essentially one mirror, which is simply extended by the functionality in reflect.jar.
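A compilable sketch of that shape (all names invented; this is not the actual reflection API): the richer universe refines the same abstract type member, rather than being a second, unrelated implementation of an abstract Universe.

```scala
// The small universe, as it might live in library.jar.
abstract class BaseUniverse {
  type Sym <: SymApi
  trait SymApi { def name: String }
  def newSym(name: String): Sym
}

// The richer universe, as it might live in reflect.jar: it narrows the
// bound on the same type member, so it strictly extends the base mirror.
abstract class RichUniverse extends BaseUniverse {
  type Sym <: RichSymApi
  trait RichSymApi extends SymApi { def info: String }
}

// The knot is tied in the concrete implementation, which fixes Sym.
class RichUniverseImpl extends RichUniverse {
  class SymImpl(val name: String) extends RichSymApi {
    def info = s"symbol $name"
  }
  type Sym = SymImpl
  def newSym(name: String): Sym = new SymImpl(name)
}
```

Code written against BaseUniverse keeps working unchanged, while users of the richer jar see the same symbols with the extended RichSymApi interface.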
Regarding the boundaries (just repeating other people actually):
library.jar - reflection can do everything Java can, but in a Scala way (Scala can see itself as Scala, up to what Java can say about Java)
reflect.jar - reflecting about the code (as Paul wrote), "advanced" type operations
compiler.jar - runtime compilation/macros
On 28.04.2012 15:41, martin odersky wrote:

Hi Martin,

Regarding the boundaries (just repeating other people actually):
library.jar - reflection can do everything Java can, but in a Scala
way (Scala can see itself as Scala, up to what Java can say about Java)
reflect.jar - reflecting about the code (as Paul wrote), "advanced"
type operations
Unfortunately it does not work that way. To do anything of consequence you need "advanced" type operations. They are not optional. As a good example, take invoke. In the case of overloading, to select the right method for a list of arguments, you need about all the mechanisms the Scala type system provides.
I'm wondering, what if I didn't need the reflection library to automatically resolve overloads for me? Would I still have to mess with additional JAR dependencies just because of this advanced feature that I might not ever use?
I would be perfectly happy for the Scala equivalent of Class.getMethods() to return separate Method references for each of the overloaded variants, just like Java does. Then my application logic can worry about which variant to call (if it needs to worry about it at all).
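What that looks like with plain Java reflection today (the example class is invented): each overload is its own Method reference, and application logic selects a variant by its parameter types.

```scala
// Made-up class with two overloads of "add".
class Calc {
  def add(a: Int, b: Int): Int = a + b
  def add(a: Double, b: Double): Double = a + b
}

object OverloadDemo {
  // Java reflection hands back one Method per overload; we pick the
  // Int variant by inspecting erased parameter types, no type system needed.
  def addInts(x: Int, y: Int): Int = {
    val m = classOf[Calc].getMethods
      .filter(_.getName == "add")
      .find(_.getParameterTypes.head == classOf[Int])
      .get
    m.invoke(new Calc, Int.box(x), Int.box(y)).asInstanceOf[Int]
  }
}
```

Resolving the overload automatically from Scala-typed arguments is exactly where the "advanced" type operations come in; doing it manually, as here, does not need them.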
I'd like to give the reflection libraries a go with that redesign. Paul, Eugene, can you commit any in-flight changes so that we don't duplicate work?
How will the ops scale to arbitrary mirrors?
I agree, and that's what code review needs to guard against. For the same reason I would be very careful with booleans in public APIs. But in an internal project it is perfectly defensible to have booleans that are passed using named arguments.
In any case, Scala lived without reflection for 8 years. I am a bit surprised that people now find the burden of an additional jar so high. If people could do without reflection for so long, that's good evidence for me that the majority will be able to continue without reflection, so it can and should be in its own module.
In a language with good enum support (like Java, ironically), enums are probably a better option in almost every case than a boolean parameter.
I agree. Hopefully there will be a good solution for enums in a future Scala release. Maybe macros will help in Scala? (:-D)
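Until then, one workable Scala substitute (a sketch; the names are invented) is a sealed trait of case objects in place of the boolean parameter:

```scala
// Enum-style flag: the call site reads process(x, ExtendedChecks),
// which is self-documenting even without named arguments.
sealed trait CheckLevel
case object BasicChecks extends CheckLevel
case object ExtendedChecks extends CheckLevel

object EnumStyle {
  def process(x: String, level: CheckLevel = BasicChecks): String = level match {
    case BasicChecks    => x.trim
    case ExtendedChecks => x.trim.toLowerCase
  }
}
```

Sealedness also buys exhaustiveness checking in the match, which a boolean gives you only trivially.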
Scala and other enlightened languages don't need to worry about that, of course.
another use case for the dearly missed style checker plugin.
On Mon, Apr 30, 2012 at 02:50, Ismael Juma <ism...@juma.me.uk> wrote:
On Mon, Apr 30, 2012 at 1:40 AM, David Hall <dl...@cs.berkeley.edu> wrote:
> Scala and other enlightened languages don't need to worry about that, of course.

I think this would have been true if named parameters were required (or could be made required in some situations). I thought we prefer for the compiler to help us when it's easy to do so. :)

Best,
Ismael