--
You received this message because you are subscribed to the Google Groups "scala-internals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scala-interna...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
Some counterexamples:
1) What if a macro expands to something which introduces a dependency? We should probably track these dependencies as well.
2) What if a macro is untyped, i.e. its arguments aren't supposed to typecheck without the macro being expanded?
3) What if a macro has untyped return type, i.e. its usages won't typecheck if we treat a macro as an opaque def?
4) What about c.introduceMember?
"For type macros typer would call macro implementation and would generate trees (and types) but it would store results of macro application in an attachment and perform macro call expansion after typer." How then would the users of the newly generated trees be able to typecheck?
Just read up on it ... we should really drop it for 2.11. There is no value derived from it, and it is completely inconsistent with other definitions of “constant”, e.g. for annotations.
Just commenting on the constant inlining: is there any reason why we emulate Java behaviour in the first place? As far as I know, it is featured heavily in Java Puzzlers and regarded as a plain mistake by the Java designers themselves.
One example that comes to my mind is the pattern matcher and its exhaustiveness checking. You could argue that we should use singleton types for that, but they do not survive erasure. I don't know if we need to know about a constant after erasure, though.
Do you like generating switch statements? The pattern matcher can't unless the patterns have constant types.
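A minimal, macro-free illustration of that constraint: Scala's `@switch` annotation asks the compiler to verify that a match can be compiled down to a JVM switch instruction, which it can only do when the patterns are constants (plain literals here; a non-constant pattern would defeat it and trigger a warning).

```scala
import scala.annotation.switch

// @switch asks the compiler to confirm this match becomes a JVM
// tableswitch/lookupswitch; that requires constant (literal) patterns.
def describe(n: Int): String = (n: @switch) match {
  case 0 => "zero"
  case 1 => "one"
  case 2 => "two"
  case _ => "many"
}
```

With a non-constant pattern (say, a plain `val` rather than a literal), the compiler falls back to a chain of equality tests instead of a switch.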
I'm not sure we are speaking about the same thing.
I'm neither talking about removing ConstantType nor about removing constant folding. See the compilation results. Apart from what I think is a bug, compilation worked successfully.
You are talking about replacing an identifier with the literal to which it refers during compilation, right?
Personally, I'm not sure how much this would impact runtime speed, and it's perfectly possible that the JVM optimizes it away, but, speaking as someone who for the most part doesn't write a single literal in his code except for 0 and 1, I'm not a fan of removing them. I wonder what Rex Kerr thinks about this.
1) Fair enough.
2) In short, untyped macros are macros whose arguments might be untypeable. You can find a motivating use case and some information here: http://docs.scala-lang.org/overviews/macros/untypedmacros.html.
3) You can say that a macro returns Any if you don't know the type of the expansion in advance (e.g. if it's an anonymous type generated on the fly). However, callsites of the macro won't be restricted to Any, but will be able to use whatever type the expansion has. E.g. if a macro with return type Any expands into "new { def x = 2 }", then the callsite can call the method x of the result: http://stackoverflow.com/questions/13669974/static-return-type-of-scala-macros.
4) Yes, it's described in the scaladoc: https://github.com/scalamacros/kepler/blob/bafebe1c161f8db0be758c30fe5cc51082a56427/src/reflect/scala/reflect/macros/Synthetics.scala.
5) That is correct.
6) And, in general, how would macro expansions be typechecked? It wouldn't be enough for the transformer of the macroexpand phase to subclass TypingTransformer. From what I remember of my experiences with patmat, localTyper can't do full-fledged typechecking (for one, it cannot correctly perform implicit search).
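Point 3 above can be illustrated without any macros, using ordinary structural types (Scala 2 syntax; the `reflectiveCalls` import enables the call): a declared return type of Any would hide the member, but the inferred type keeps it visible at the call site.

```scala
import scala.language.reflectiveCalls

// Inferred structural type: AnyRef { def x: Int }. Had we declared the
// result type as Any, the call `make.x` below would not typecheck.
def make = new { def x = 2 }

val result = make.x // compiles: the precise (structural) type is known
```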
5) I also have doubts that putting the result of a type macro
expansion into an attachment is going to work. As shown by SI-6187,
carrying trees between phases using attachments is easy to get wrong.
7) Btw why do you dislike the idea of storing original trees of a
macro expansion in an attachment? There's already a mechanism in place
for that
https://github.com/scalamacros/kepler/blob/466fc670a38836dfb81d75f72d46ddcaa12bc3bb/src/reflect/scala/reflect/internal/StdAttachments.scala#L36.
The payload of MacroExpansionAttachment is the unmodified original
tree, so there's a guarantee that it encompasses all the dependencies
that the user has written.
On 22 February 2013 04:19, Eugene Burmako <eugene....@epfl.ch> wrote:
1) Fair enough.
2) In short, untyped macros are macros whose arguments might be untypeable. You can find a motivating use case and some information here: http://docs.scala-lang.org/overviews/macros/untypedmacros.html.
Thanks for the link. I still don't understand whether an untyped macro is simply a tree transformer that has no access to the type checker (but has access to the symbol table), or a regular macro that skips type checking of the trees that are passed to it during expansion?
In any case, I think the general idea of applying a macro but not rewriting trees (and preserving the tree nodes for the macro application and its arguments) still works here.
3) You can say that a macro returns Any if you don't know the type of the expansion in advance (e.g. if it's an anonymous type generated on the fly). However, callsites of the macro won't be restricted to Any, but will be able to use whatever type the expansion has. E.g. if a macro with return type Any expands into "new { def x = 2 }", then the callsite can call the method x of the result: http://stackoverflow.com/questions/13669974/static-return-type-of-scala-macros.
Yes, I'm aware of this problem. However, it's not very different from regular type inference for method calls. If you have:

  def id[T](x: T): T = x
  def foo = id(12)

you don't know the type of `id(12)` until you type check the application. After type checking you get:

  def foo = id[Int](12) // the Apply node has the Int type set

but that does not mean you need to inline the `id` application. You still have an Apply node. I'd like to do the same for macro applications. You type check them and set the correct return type on the Apply node (so your example with new { def x = 2 } works), but the original tree for the macro application is preserved.
We have a very good example of a situation where there's a distinction between type checking trees and doing the actual expansion: pattern matching. Patterns are type checked during typer, but trees are expanded during the `patmat` phase. I think that's a great, clean design that macros should consider following.
WDYT?
What tree do you propose to pass to `macro1` here? Apply(<macro2>, [x]), or <expansion of macro2(x)>?
macro1(macro2(x))
Dependency tracking should account for both the macro and the expansion, so you'll have a problem in either case. So the question becomes: which of the two trees do we pass as the primary tree, and which as an attachment? Eugene alluded to the risks of passing real trees (ie, ones that are intended to progress through the phases) as attachments -- they are ignored by Transformers and Traversers. But that might not be a big problem for dep. tracking -- so long as your traverser knows which attachment to descend into, you can find the link to the unexpanded macro application.
On 22 February 2013 14:57, Jason Zaugg <jza...@gmail.com> wrote:
Dependency tracking should account for both the macro and the expansion, so you'll have a problem in either case. So the question becomes: which of the two trees do we pass as the primary tree, and which as an attachment? Eugene alluded to the risks of passing real trees (ie, ones that are intended to progress through the phases) as attachments -- they are ignored by Transformers and Traversers. But that might not be a big problem for dep. tracking -- so long as your traverser knows which attachment to descend into, you can find the link to the unexpanded macro application.
The problem is that in your example macro1 can throw away the result of macro2's expansion and thus all the attachments. Is there any way the dependency tracker can recover from that situation that I'm missing?
The pre-expansion attachment of macro1 should have, as an argument, the expansion of macro2, which would, in turn, have a pre-expansion attachment referring to `macro2` itself.
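A hedged sketch of that recovery path (all names here are hypothetical toy types, not the real compiler API): a dependency traverser that also descends into a pre-expansion attachment can still reach the unexpanded application, even though ordinary traversers ignore attachments.

```scala
// Toy tree with an optional pre-expansion attachment.
sealed trait Tree { var attachment: Option[Tree] = None }
final case class Ref(name: String) extends Tree
final case class Node(children: List[Tree]) extends Tree

// Collect referenced names, additionally descending into attachments --
// the extra step a dependency tracker would need to take.
def collectRefs(t: Tree): Set[String] = {
  val here = t match {
    case Ref(n)   => Set(n)
    case Node(cs) => cs.flatMap(collectRefs).toSet
  }
  here ++ t.attachment.map(collectRefs).getOrElse(Set.empty[String])
}

// The expansion of macro2(x), carrying the original application along.
val expansion = Node(List(Ref("x")))
expansion.attachment = Some(Node(List(Ref("macro2"), Ref("x"))))
```

Walking `expansion` alone would only see `x`; descending into the attachment also recovers the dependency on `macro2`.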
Mhhh, I don't think it depends on a specific Scala version, it is just uncovered by not doing constant inlining. Should I add a diff to the bug tracker?
... or maybe I don't really understand how the build step should look like. Should I just push the change into my own repo and refer to that SHA1?
By the way, won't getting rid of constant inlining simplify the tracking of changes and their dependencies in things like sbt? I imagine that this would lead to fewer recompilations down the road.
Yes, push the change to GitHub and refer to its SHA1. I'll have a look.
Actually, constant inlining hides dependencies, so it could cause sbt to not recompile if the constant has changed (which is incorrect). At the moment sbt uses a different mechanism for tracking those dependencies than walking trees (see my first e-mail in this thread), so it is immune to the problem we are discussing.
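A toy sketch of why tree walking misses inlined constants (a made-up mini-AST, not sbt's actual extractor): once `A.MaxSize` has been replaced by its value, no reference to `A` remains in the tree for any traversal to find.

```scala
sealed trait Tree
final case class Select(owner: String, name: String) extends Tree // e.g. A.MaxSize
final case class Lit(value: Any) extends Tree                     // e.g. 42
final case class Call(fn: String, args: List[Tree]) extends Tree

// A tree-walking dependency extractor: only references are trackable.
def deps(t: Tree): Set[String] = t match {
  case Select(owner, _) => Set(owner)
  case Lit(_)           => Set.empty
  case Call(_, args)    => args.flatMap(deps).toSet
}

val beforeInlining = Call("check", List(Select("A", "MaxSize")))
val afterInlining  = Call("check", List(Lit(42))) // the link to A is gone
```

So a tree-based tracker would see the dependency on `A` before inlining but nothing at all after it, which is exactly the staleness hazard described above.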
Also, if you think about it, this two-phase design makes sense for another reason: I could traverse trees twice:
- after type checking, to capture dependencies on macro applications and their arguments
- after the expansion phase, where I can capture dependencies of the macro expansion result
Also, the cool thing about this design is that we could move constant inlining to the expansion phase and solve the problem with dependencies on constants as well.
WDYT?
4.1) Type macros have two modes: a) "something extends
TypeMacro(args)", b) everything else like "type X = TypeMacro(args)".
To implement b, you need an API to synthesize classes. Otherwise one
would be forced to write boilerplate like "class Temp extends
TypeMacro(args); type X = Temp", which wouldn't even work if args
refer to type parameters of X. More examples here:
https://github.com/scalamacros/kepler/blob/paradise/macros/test/files/run/macro-typemacros-used-in-funny-places-a/Test_2.scala.
4.2) In my opinion, from a philosophical standpoint synthesizing
classes out of nowhere looks only a bit scarier than synthesizing
members out of nowhere.
8) Thanks for the explanation! Now I think I understand your original
idea.
First I would like to note that function application and pattern
matching are quite different from macros, because they don't change
the semantics of code. At the moment macros can already introduce new
bindings and affect typechecking and type inference.
I agree that this versatility needs to be controlled, but it would be
crippling for macro experimentation to introduce limitations. An
example of such a limitation is the MacroApply node. Why limit macros
to just applications? What about type macros and, possibly, macro
annotations? Those would require different mechanisms, and some future
macros would probably require other mechanisms.
Also I'm not very optimistic about doing something non-trivial in
namer/typer and then delaying the application of that something to a
later point in time. Currently we have quite a roundabout way to
synthesize certain members, such as case class methods. This makes
understanding the typer harder, and it also introduces inconsistencies:
situations in which the tree that we mean differs from the tree that we
actually see (an example of such a problem is Jason's question about
nested macro calls).
Sure, given enough effort and skill, one might be able to reproduce
the former from the latter, but that's something I would like to
impose neither on reflection API users nor on scalac hackers. It's
somewhat similar to the situation we currently have with symbol
corruption when one transplants trees from one context to another,
which is an internal implementation detail leaking to macro users,
creating hurdles out of the blue.
7) If necessary, we can standardize the MacroExpansionAttachment (or
something similar) in the public API. From the experience of other
languages we can see that macro writers already need to cater for the
debuggability of the code they produce and to take special measures to
play well with IDEs. It would make sense to ask macro developers to
help the incremental compiler as well. After all, who but the macro
programmer knows what's best for his/her macros?
E.g. just as we have the c.onInfer(TypeInferenceContext) callback to
guide type inference in http://docs.scala-lang.org/overviews/macros/inference.html,
we could have the
c.onCalculateDependencies(IncrementalCompilerContext) callback to help
dependency analysis. In comparison with MacroApply this has the
benefit of not limiting unexpected macro flavors and unexpected ways
to use macros. How does this sound?
Hi Greg, Eugene,
I'm afraid there is also a third, most problematic kind of dependency:
dependencies that a macro implementation uses while executing but which do not appear in explicit form in the expanded tree.
Introducing a member is one example of such a dependency, but it's enough to just 'look at' some entity during macro execution to introduce a dependency on it.
The same situation, where just asking about something introduces a dependency, appears in inter-procedural analysis, and we jokingly call such an effect a "quantum effect of measurement".
I believe the precise dependencies of a tree in the presence of macros can be defined as the sum of three components:
1) the dependencies of the (macro-expanded) tree
2) the bodies of all macros invoked during expansion of the tree
3) entities 'looked at' by macros executed during tree expansion.
In principle, (2) is just a subset of the more general (3), which can be computed by recording type/tree/symbol accesses during macro expansion.
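Component (3) can be sketched as a recording wrapper around the symbol table (names hypothetical): the mere act of looking something up, whether or not it exists, registers a dependency.

```scala
// A lookup facade that records every access made during (hypothetical)
// macro execution -- the "measurement introduces a dependency" effect.
final class RecordingSymbolTable(underlying: Map[String, String]) {
  private var accessed = Set.empty[String]
  def lookup(name: String): Option[String] = {
    accessed += name // looking is enough to create a dependency
    underlying.get(name)
  }
  def recordedDeps: Set[String] = accessed
}

val table = new RecordingSymbolTable(Map("Foo" -> "class Foo"))
table.lookup("Foo") // found, and recorded
table.lookup("Bar") // absent -- still a dependency, on Bar's absence
```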
Besides macros there are other language features which may require similar techniques to collect precise dependencies:
1) `for` desugaring: it can produce different code depending on the presence of a `withFilter` method, so there is a dependency on the presence or absence of `withFilter`.
2) implicits: the result of implicit search depends on the whole contents of the implicit scope, and also on all existing members of the expression's type for the "method missing" case.
3) applyDynamic: the tree depends on all members of the receiver's type.
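Case 1 can be demonstrated directly: a guard in a `for` comprehension desugars into a `withFilter` call, so the code below compiles only because the (made-up) `Box` class happens to define that member.

```scala
// A minimal container supporting `for ... if ... yield`.
final class Box(xs: List[Int]) {
  def withFilter(p: Int => Boolean): Box = new Box(xs.filter(p))
  def map[B](f: Int => B): List[B] = xs.map(f)
}

// Desugars to: new Box(...).withFilter(x => x % 2 == 0).map(x => x * 10).
// Deleting `withFilter` above would make this line fail to compile: a
// dependency on the member's presence rather than on any visible tree node.
val evens = for (x <- new Box(List(1, 2, 3, 4)) if x % 2 == 0) yield x * 10
```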
I'm not very familiar with this tree attachment business, so please explain:
Do you store attachments in the class files?
So you create some form of pdb (program database) and use it in subsequent compiler runs, right?
The intended way to use introduceTopLevel is to support type macros like the one shown above, so that the only way people can get a reference to the conjured type is through an expansion of a type macro. This doesn't create dependencies on the order of compilation. But I agree that c.introduceTopLevel seems fishy, because it can be misused (for the same reason that c.introduceMember didn't gain much popularity), and I'm looking for alternatives to it.
This thing isn't mutually exclusive with what you propose. The default way will involve trees as you describe. However, if a macro programmer decides to go the extra mile to manifest some dependencies that cannot be discovered by static analysis, we should probably provide a way to do that. Maybe not right now, but rather when the API you're building is stabilized.
onCalculateDependencies is just a way to give the macro writer control over dependency tracking if he/she desires. The exact protocol is a subject of discussion. Should we decide to use attachments, IncrementalCompilerContext will use attachments. If we need more information about kinds of dependencies, we could refine the context to expose more controls.