Dependency tracking and macros

Grzegorz Kossakowski

Feb 21, 2013, 6:43:54 PM
to scala-internals
Hi,

I'm reworking sbt's dependency tracking algorithm for the incremental compiler. In sbt, "dependency tracking" means several things, so here by "dependency" I mean that a given tree refers to a given symbol. For example, in:

class Foo extends Bar[Int]

the template for 'Foo' depends on both 'Bar' and 'Int'.

The compiler already has a mechanism that tracks dependencies of a given compilation unit; see https://github.com/scala/scala/blob/master/src/compiler/scala/tools/nsc/CompilationUnits.scala#L61

However, that API is not granular enough. Specifically, I'd like to know what kind of tree refers to a given symbol. For example, I'd like the incremental compiler to make different decisions depending on whether we merely refer to members of 'Bar' by selection or inherit from 'Bar'. Therefore, I want to walk trees (after typer) and extract the dependencies myself, along with the context information I need. Now we're ready to see how macros make dependency analysis problematic, due to the fact that they are expanded in typer.

Consider code like this:

def foo = getName[Foo] // getName is a macro call

where getName is a simple macro that takes a type tag and returns a literal string with the name of the type the tag represents. After typer, the code above will look like this:

def foo = "Foo" // look, no reference to Foo symbol!

So my dependency-analysis phase will never see the dependency on Foo, because of the macro expansion.
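For concreteness, here is roughly what such a getName macro could look like with the 2.10 macro API (a sketch, not an exact implementation):

import scala.language.experimental.macros
import scala.reflect.macros.Context

object Macros {
  // getName[Foo] expands to the string literal "Foo".
  def getName[T]: String = macro getNameImpl[T]

  def getNameImpl[T: c.WeakTypeTag](c: Context): c.Expr[String] = {
    import c.universe._
    c.Expr[String](Literal(Constant(weakTypeOf[T].typeSymbol.name.toString)))
  }
}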

Another example of the same problem is constant inlining. Consider:

class JavaStatics { static final int CONST = 12; } // final: only constants are inlined

and Scala code like this:

def bar: Int = JavaStatics.CONST 

after typer we get the following tree:

def bar: Int = 12

This happens because of constant inlining done in typer. With those two examples in mind, I was about to propose that we defer macro expansion and constant inlining to a later phase run after typer, and turn typer into a pure attribution phase which assigns types to trees and does nothing else. This way, a macro application and its arguments would be visible to my dependency-analysis phase. The same goes for references to static fields and other compile-time constants.

However, Adriaan pointed out that pushing macros to a later phase wouldn't fly with type macros that want to introduce new members. The alternative he proposed is that I look into attachments that store the original trees before macro expansion. We would store the original tree for constant inlining as an attachment as well. This sounds ok but not great. The interface for attachments is opaque: I don't know what kinds of attachments I need to analyze, nor do I have any guarantee that information about all dependencies is stored there. Ideally, I'd still prefer typer not to perform macro expansions and to preserve macro applications. One could imagine that for type macros typer would call the macro implementation and would generate trees (and types), but it would store the results of macro application in an attachment and perform the actual expansion after typer. How does this sound?

--
Grzegorz Kossakowski
Scalac hacker at Typesafe
twitter: @gkossakowski

Simon Ochsenreither

Feb 21, 2013, 7:05:17 PM
to scala-i...@googlegroups.com
Just commenting on the constant inlining: is there any reason why we emulate Java's behaviour in the first place? As far as I know, it features heavily in Java Puzzlers and is regarded as a plain mistake by the Java designers themselves.

Eugene Burmako

Feb 21, 2013, 7:06:20 PM
to <scala-internals@googlegroups.com>
Some counterexamples:
1) What if a macro expands to something which introduces a dependency? We should probably track these dependencies as well.
2) What if a macro is untyped, i.e. its arguments aren't supposed to typecheck without the macro being expanded?
3) What if a macro has untyped return type, i.e. its usages won't typecheck if we treat a macro as an opaque def?
4) What about c.introduceMember?

"For type macros typer would call macro implementation and would generate trees (and types) but it would store results of macro application in an attachment and perform macro call expansion after typer." How would then the users of the newly generated trees be able to typecheck?



Grzegorz Kossakowski

Feb 21, 2013, 7:20:03 PM
to scala-i...@googlegroups.com
On 21 February 2013 16:06, Eugene Burmako <eugene....@epfl.ch> wrote:
Some counterexamples:
1) What if a macro expands to something which introduces a dependency? We should probably track these dependencies as well.

Ideally, yes. However, I'm curious to hear how to do that. If we are talking about statically known dependencies of a macro (e.g. reified trees that use static types) then we are fine: this will be recorded as a dependency of the macro itself, so when such a dependency changes, the macro will get invalidated first and then all of its expansions (assuming we can record the fact that we depend on a macro application).

If we are talking about dependencies which are not statically known to a macro and its call site, then I think we are out of luck here. I'm fine with accepting that as a limitation.
 
2) What if a macro is untyped, i.e. its arguments aren't supposed to typecheck without the macro being expanded?

I don't know. I don't understand the concept of untyped macros and their relationship to type checking.
 
3) What if a macro has untyped return type, i.e. its usages won't typecheck if we treat a macro as an opaque def?

Can you have a macro declared without any return type?
 
4) What about c.introduceMember?

I don't know because I don't understand how introduceMember is supposed to behave. Is it described anywhere?
 
"For type macros typer would call macro implementation and would generate trees (and types) but it would store results of macro application in an attachment and perform macro call expansion after typer." How would then the users of the newly generated trees be able to typecheck?

Users need to be able to look up symbols and types. They don't need to look up trees, right?

Simon Ochsenreither

Feb 21, 2013, 7:20:42 PM
to scala-i...@googlegroups.com
Just read up on it ... we should really drop it for 2.11. There is no value derived from it, and it is completely inconsistent with other definitions of "constant", e.g. for annotations.
Additionally, having had a short look at the implementation of ConstantType, it doesn't seem to conform to the spec anyway (hint: null).

Paul Phillips

Feb 21, 2013, 7:30:06 PM
to scala-i...@googlegroups.com

On Thu, Feb 21, 2013 at 4:20 PM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
Just read up on it ... we should really drop it for 2.11. There is no value derived from it and it is completely inconsistent with other definitions of “constant”, e. g. for annotations.
 
Ha ha, sure we'll drop it for 2.11. Give that one a whirl. Don't forget to try to compile things afterward.

You can't run typer without having performed constant folding, because type inference depends on it. I think you'll find a bunch of things depend (explicitly or otherwise) on it happening approximately when it does. You could preserve the original trees, but I am skeptical that deferring it is even possible.
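One concrete instance: annotation arguments must be compile-time constants, and that folding happens while the tree is being typed. A small illustration:

object Version {
  final val Major = 2L  // ConstantType(2L)
  final val Minor = 10L // ConstantType(10L)
}

// The argument below has to fold to the constant 210L during typing;
// without constant folding in typer this would not compile.
@SerialVersionUID(Version.Major * 100 + Version.Minor)
class Versioned extends Serializable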

Simon Ochsenreither

Feb 21, 2013, 7:30:26 PM
to scala-i...@googlegroups.com
Just tried it and removed that part. Compilation of everything worked fine, but the continuations plugin caused an issue in quick:

[scalacfork] error: java.lang.NullPointerException
[scalacfork]     at scala.tools.nsc.SubComponent.<init>(SubComponent.scala:48)
[scalacfork]     at scala.tools.nsc.typechecker.Analyzer$namerFactory$.<init>(Analyzer.scala:32)
[scalacfork]     at scala.tools.nsc.Global$$anon$1.namerFactory$lzycompute(Global.scala:438)
[scalacfork]     at scala.tools.nsc.Global$$anon$1.namerFactory(Global.scala:438)
[scalacfork]     at scala.tools.nsc.Global.computeInternalPhases(Global.scala:652)
[scalacfork]     at scala.tools.nsc.Global.computePhaseDescriptors(Global.scala:694)
[scalacfork]     at scala.tools.nsc.Global.phaseDescriptors$lzycompute(Global.scala:701)
[scalacfork]     at scala.tools.nsc.Global.phaseDescriptors(Global.scala:701)
[scalacfork]     at scala.tools.nsc.Global$Run.<init>(Global.scala:1209)
[scalacfork]     at scala.tools.nsc.Driver.doCompile(Driver.scala:32)
[scalacfork]     at scala.tools.nsc.Main$.doCompile(Main.scala:53)
[scalacfork]     at scala.tools.nsc.Driver.process(Driver.scala:54)
[scalacfork]     at scala.tools.nsc.Driver.main(Driver.scala:67)
[scalacfork]     at scala.tools.nsc.Main.main(Main.scala)


This is basically caused by SubComponent's global value being null. It looks as if constant inlining papers over some class initialization issue. I can't imagine that this is intentional.
Should I file a bug?

Grzegorz Kossakowski

Feb 21, 2013, 7:31:52 PM
to scala-i...@googlegroups.com
On 21 February 2013 16:05, Simon Ochsenreither <simon.och...@gmail.com> wrote:
Just commenting on the constant inlining: is there any reason why we emulate Java's behaviour in the first place? As far as I know, it features heavily in Java Puzzlers and is regarded as a plain mistake by the Java designers themselves.

One example that comes to my mind is the pattern matcher and its exhaustiveness checking. You could argue that we should use singleton types for that, but they do not survive erasure. I don't know if we need to know about a constant after erasure, though.

Paul Phillips

Feb 21, 2013, 7:36:17 PM
to scala-i...@googlegroups.com

On Thu, Feb 21, 2013 at 4:31 PM, Grzegorz Kossakowski <grzegorz.k...@gmail.com> wrote:
One example that comes to my mind is the pattern matcher and its exhaustiveness checking. You could argue that we should use singleton types for that, but they do not survive erasure. I don't know if we need to know about a constant after erasure, though.

Do you like generating switch statements? The pattern matcher can't unless the patterns have constant types.

object Test {
  final val A = 1
  final val B = 2
  final val C = A + B
  final val D = 4
  final val E = 5
  final val F = D + E

  def f(x: Int) = x match {
    case A | B | C => 1
    case D | E | F => 2
  }
  // public int f(int);
  // flags: ACC_PUBLIC
  // Code:
  //   stack=3, locals=3, args_size=2
  //      0: iload_1
  //      1: istore_2
  //      2: iload_2
  //      3: lookupswitch  { // 6
  //                    1: 76
  //                    2: 76
  //                    3: 76
  //                    4: 72
  //                    5: 72
  //                    9: 72
  //              default: 60
  //         }
}

If A..F are instead given an explicit Int type (so they lose their constant types), you get a rather different story:

  public int f(int);
    flags: ACC_PUBLIC
    Code:
      stack=3, locals=6, args_size=2
         0: iload_1       
         1: istore_2      
         2: aload_0       
         3: invokevirtual #37                 // Method A:()I
         6: iload_2       
         7: if_icmpne     15
        10: iconst_1      
        11: istore_3      
        12: goto          43
        15: aload_0       
        16: invokevirtual #39                 // Method B:()I
        19: iload_2       
        20: if_icmpne     28
        23: iconst_1      
        24: istore_3      
        25: goto          43
        28: aload_0       
        29: invokevirtual #41                 // Method C:()I
        32: iload_2       
        33: if_icmpne     41
        36: iconst_1      
        37: istore_3      
        38: goto          43
        41: iconst_0      
        42: istore_3      
        43: iload_3       
        44: ifeq          53
        47: iconst_1      
        48: istore        4
        50: goto          106
        53: aload_0       
        54: invokevirtual #43                 // Method D:()I
        57: iload_2       
        58: if_icmpne     67
        61: iconst_1      
        62: istore        5
        64: goto          98
        67: aload_0       
        68: invokevirtual #45                 // Method E:()I
        71: iload_2       
        72: if_icmpne     81
        75: iconst_1      
        76: istore        5
        78: goto          98
        81: aload_0       
        82: invokevirtual #47                 // Method F:()I
        85: iload_2       
        86: if_icmpne     95
        89: iconst_1      
        90: istore        5
        92: goto          98
        95: iconst_0      
        96: istore        5
        98: iload         5
       100: ifeq          109
       103: iconst_2      
       104: istore        4
       106: iload         4
       108: ireturn       
       109: new           #49                 // class scala/MatchError
       112: dup           
       113: iload_2       
       114: invokestatic  #55                 // Method scala/runtime/BoxesRunTime.boxToInteger:(I)Ljava/lang/Integer;
       117: invokespecial #58                 // Method scala/MatchError."<init>":(Ljava/lang/Object;)V
       120: athrow        

Simon Ochsenreither

Feb 21, 2013, 7:41:49 PM
to scala-i...@googlegroups.com
I'm not sure we are speaking about the same thing.

I'm neither talking about removing ConstantType nor about removing constant folding. See the compilation results. Apart from what I think is a bug, compilation worked successfully.

Adriaan Moors

Feb 21, 2013, 7:50:33 PM
to scala-i...@googlegroups.com

On Thu, Feb 21, 2013 at 4:36 PM, Paul Phillips <pa...@improving.org> wrote:
Do you like generating switch statements? The pattern matcher can't unless the patterns have constant types.
Right, we could still infer the constant types while leaving the expression as-is. The pattern matcher was recently fixed (on your suggestion, I believe) to only look at the types of patterns to determine switchability.

Daniel Sobral

Feb 21, 2013, 10:07:00 PM
to scala-i...@googlegroups.com
On 21 February 2013 21:41, Simon Ochsenreither <simon.och...@gmail.com> wrote:
I'm not sure we are speaking about the same thing.

I'm neither talking about removing ConstantType nor about removing constant folding. See the compilation results. Apart from what I think is a bug, compilation worked successfully.

You are talking about replacing an identifier with the literal to which it refers during compilation, right?

Personally, I'm not sure how much this would impact runtime speed, and it's perfectly possible that the JVM optimizes it away, but, speaking as someone who for the most part doesn't write a single literal in his code except for 0 and 1, I'm not a fan of removing them. I wonder what Rex Kerr thinks about this.
 

--
Daniel C. Sobral

I travel to the future all the time.

Simon Ochsenreither

Feb 21, 2013, 11:04:50 PM
to scala-i...@googlegroups.com
Hi,


You are talking about replacing an identifier with the literal to which it refers during compilation, right?
Personally, I'm not sure how much this would impact runtime speed, and it's perfectly possible that the JVM optimizes it away, but, speaking as someone who for the most part doesn't write a single literal in his code except for 0 and 1, I'm not a fan of removing them. I wonder what Rex Kerr thinks about this.

I don't think there will be an impact on runtime speed, because what the compiler does here is one of the most trivial optimizations for the JVM. It's a clear case of premature optimization which can cause a lot of weird behaviour under separate compilation. In fact, it was not even meant as an optimization, but as a means of conditional compilation, like if (DEBUG) ..., where the whole statement can be elided if DEBUG is known to be false.
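For illustration, a minimal sketch of that conditional-compilation pattern:

object Config {
  final val DEBUG = false // ConstantType(false)
}

object App {
  def run(): Unit = {
    // DEBUG folds to the literal `false` during typing, so this whole
    // statement is dead code and can be dropped by the compiler.
    if (Config.DEBUG) println("entering run()")
  }
}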
Even the creator says "it's simply a mistake": http://www.parleys.com/#st=5&id=2804&sl=11 at 18:24. It's right on the same slide as lossy widening conversions and == vs. equals.

I think if someone wants some value to be inlined, @inline is the way to go.

Thanks and bye,

Simon

Eugene Burmako

Feb 22, 2013, 7:19:54 AM
to <scala-internals@googlegroups.com>
1) Fair enough.

2) In short, untyped macros are macros whose arguments might be untypeable. You can find a motivating use case and some information here: http://docs.scala-lang.org/overviews/macros/untypedmacros.html.

3) You can say that a macro returns Any if you don't know the type of the expansion in advance (e.g. if it's an anonymous type generated on the fly). However, callsites of the macro won't be restricted to Any, but will be able to use whatever type the expansion has. E.g. if a macro which has return type Any expands into "new { def x = 2 }", then the callsite can call the method x of the result: http://stackoverflow.com/questions/13669974/static-return-type-of-scala-macros.
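A minimal sketch of that scenario (hypothetical names, 2.10 API):

import scala.language.experimental.macros
import scala.reflect.macros.Context

object Impl {
  def anon(c: Context): c.Expr[Any] = {
    // The expansion's actual type is the refinement AnyRef{def x: Int}.
    c.Expr[Any](c.parse("new { def x = 2 }"))
  }
}

object Macros {
  def anon: Any = macro Impl.anon
}

// At a callsite (in a downstream compilation unit) this typechecks,
// because the callsite is typed against the expansion, not against Any:
//   val two: Int = Macros.anon.x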

4) Yes, it's described in a scaladoc: https://github.com/scalamacros/kepler/blob/bafebe1c161f8db0be758c30fe5cc51082a56427/src/reflect/scala/reflect/macros/Synthetics.scala.

5) That is correct.

6) And, in general, how would macro expansions be typechecked? It wouldn't be enough to have the transformer of the macroexpand phase subclass TypingTransformer. From what I remember of my experiences with patmat, localTyper can't do full-fledged typechecking (for one, it cannot correctly perform implicit search).


Eugene Burmako

Feb 22, 2013, 7:28:23 AM
to scala-internals
5) I also have doubts that putting the result of a type macro expansion into an attachment is going to work. As shown by SI-6187, carrying trees between phases using attachments is easy to get wrong.

7) Btw, why do you dislike the idea of storing the original trees of a macro expansion in an attachment? There's already a mechanism in place for that: https://github.com/scalamacros/kepler/blob/466fc670a38836dfb81d75f72d46ddcaa12bc3bb/src/reflect/scala/reflect/internal/StdAttachments.scala#L36. The payload of MacroExpansionAttachment is the unmodified original tree, so there's a guarantee that it encompasses all the dependencies that the user has written.

Grzegorz Kossakowski

Feb 22, 2013, 2:48:12 PM
to scala-i...@googlegroups.com
On 22 February 2013 04:19, Eugene Burmako <eugene....@epfl.ch> wrote:
1) Fair enough.

2) In short, untyped macros are macros whose arguments might be untypeable. You can find a motivating use case and some information here: http://docs.scala-lang.org/overviews/macros/untypedmacros.html.

Thanks for the link. I still don't understand whether an untyped macro is simply a tree transformer that has no access to the type checker (but has access to the symbol table), or a regular macro that skips type checking of the trees passed to it during expansion.

In any case, I think the general idea of applying a macro but not rewriting trees (preserving the tree nodes for the macro application and its arguments) still works here.
 
3) You can say that a macro returns Any if you don't know the type of the expansion in advance (e.g. if it's an anonymous type generated on the fly). However, callsites of the macro won't be restricted to Any, but will be able to use whatever type the expansion has. E.g. if a macro which has return type Any expands into "new { def x = 2 }", then the callsite can call the method x of the result: http://stackoverflow.com/questions/13669974/static-return-type-of-scala-macros.

Yes, I'm aware of this problem. However, it's not very different from regular type inference for method calls. If you have:

def id[T](x: T): T = x
def foo = id(12)

you don't know the type of `id(12)` until you type check the application. After type-checking you get:

def foo = id[Int](12) // the Apply node has the Int type set

but that does not mean you need to inline the `id` application. You still have an Apply node. I'd like to do the same for macro applications: you type check them and set the correct return type on the Apply node (so your example with new { def x = 2 } works), but the original tree for the macro application is preserved.

We have a very good example of a situation where there's a distinction between type checking trees and doing the actual expansion: pattern matching. Patterns are type checked during typer, but the trees are expanded during the `patmat` phase. I think that's a great, clean design that macros should consider following.

WDYT?

4) Yes, it's described in a scaladoc: https://github.com/scalamacros/kepler/blob/bafebe1c161f8db0be758c30fe5cc51082a56427/src/reflect/scala/reflect/macros/Synthetics.scala.

Ok, that is very problematic. Is there any reason we have an API like that? I thought type macros address a similar problem differently. Being able to create new classes from anywhere sounds quite scary, no?
 
5) That is correct.

6) And, in general, how would macro expansions be typechecked? It wouldn't be enough to have the transformer of the macroexpand phase subclass TypingTransformer. From what I remember of my experiences with patmat, localTyper can't do full-fledged typechecking (for one, it cannot correctly perform implicit search).

As I mentioned before, the idea would be that you perform full type checking of macro expansions during typer (so you can determine the return type of a macro) but defer the tree rewriting to the next phase. This way I can insert my phase and capture all the information I need about macro expansions.

Also, if you think about it, this two-phase design makes sense for another reason. I could traverse trees twice:
  1. after type checking, to capture dependencies on macro applications and their arguments
  2. after the expansion phase, where I can capture dependencies of the macro expansion result
Also, the cool thing about this design is that we could move constant inlining to the expansion phase and solve the problem with dependencies on constants as well.

WDYT?

Grzegorz Kossakowski

Feb 22, 2013, 2:59:08 PM
to scala-i...@googlegroups.com
On 22 February 2013 04:28, Eugene Burmako <eugene....@epfl.ch> wrote:
5) I also have doubts that putting the result of a type macro expansion into an attachment is going to work. As shown by SI-6187, carrying trees between phases using attachments is easy to get wrong.

The alternative is to introduce a new tree node. If we come to the conclusion that we cannot reasonably reuse the current tree nodes for macros, then I'm all for that. If we had a node like:

case class MacroApply(fun: Tree, args: List[Tree], expansion: Tree)

that would be ideal. Before type checking, expansion would be EmptyTree; afterwards, it would contain the macro expansion. In the next phase run after typer you could apply a very simple rewrite: MacroApply(_, _, expansion) => expansion.

The cool thing about it is that I could extract all the dependencies in one pass, just by traversing trees after typer. The IDE could also benefit from this by being able to show the user the result of a macro expansion on hover or on command. This way, the whole design would be a lot more explicit.
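To sketch the rewrite (hypothetical, of course, since no MacroApply node exists today):

// Purely hypothetical: assumes the proposed MacroApply node exists in
// the compiler's Tree hierarchy; runs as a phase right after typer.
class MacroExpandTransformer extends Transformer {
  override def transform(tree: Tree): Tree = tree match {
    case MacroApply(_, _, expansion) => transform(expansion)
    case _                           => super.transform(tree)
  }
}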
 

7) Btw, why do you dislike the idea of storing the original trees of a macro expansion in an attachment? There's already a mechanism in place for that: https://github.com/scalamacros/kepler/blob/466fc670a38836dfb81d75f72d46ddcaa12bc3bb/src/reflect/scala/reflect/internal/StdAttachments.scala#L36. The payload of MacroExpansionAttachment is the unmodified original tree, so there's a guarantee that it encompasses all the dependencies that the user has written.

Because I don't want sbt or the IDE to depend on internal compiler APIs. The ultimate goal is that both the IDE and sbt will be able to work with public APIs (probably the reflection API plus some utilities from the compiler). I'm not sure tree attachments should be exposed as an official API. As you mentioned before, they are very tricky to get right, so we don't want to expose them and have to support them for years to come.

Now, if we need to grow the public API, why not go the direct route of adding a tree node? I think macros are such a significant addition to Scala that they deserve their own tree node, which would allow us to express their nature more directly.

--
Grzegorz Kossakowski
Scalac hacker at Typesafe
twitter: @gkossakowski

Jason Zaugg

Feb 22, 2013, 5:17:53 PM
to scala-i...@googlegroups.com
On Fri, Feb 22, 2013 at 8:48 PM, Grzegorz Kossakowski <grzegorz.k...@gmail.com> wrote:
On 22 February 2013 04:19, Eugene Burmako <eugene....@epfl.ch> wrote:
[...]

We have a very good example of a situation where there's a distinction between type checking trees and doing the actual expansion: pattern matching. Patterns are type checked during typer, but the trees are expanded during the `patmat` phase. I think that's a great, clean design that macros should consider following.

WDYT?

What tree do you propose to pass to `macro1` here? Apply(<macro2>, [x]), or <expansion of macro2(x)>?

macro1(macro2(x)) 

-jason

Grzegorz Kossakowski

Feb 22, 2013, 5:47:43 PM
to scala-i...@googlegroups.com
On 22 February 2013 14:17, Jason Zaugg <jza...@gmail.com> wrote:

What tree do you propose to pass to `macro1` here? Apply(<macro2>, [x]), or <expansion of macro2(x)>?

macro1(macro2(x)) 

Oh, good point, I hadn't thought about nested macro applications. To stay uniform, I'd pass Apply(<macro2>, [x]), but I can see why that would not be the desired option for many applications of macros.

On the other hand, I can see use cases for passing non-expanded macros. For example, if you wanted to disallow the use of one macro in the context of another macro, what you want to have as an argument is a tree that represents the macro application.

This starts to remind me of the difference between by-value and by-name parameters. Both have their use cases. You could argue that since Scala is strict by default it should follow the same rule for macros (that is, we should pass the macro expansion in your example above), but then we are back to the problem of tracking dependencies. Jason, do you have any ideas to share on how to tackle this?

Jason Zaugg

Feb 22, 2013, 5:57:41 PM
to scala-i...@googlegroups.com
I think we have to expand the macros "eagerly" during typer.

Dependency tracking should account for both the macro and the expansion, so you'll have a problem in either case. So the question becomes: which of the two trees do we pass as the primary tree, and which as an attachment? Eugene alluded to the risks of carrying real trees (i.e., ones that are intended to progress through the phases) in attachments -- they are ignored by Transformers and Traversers. But that might not be a big problem for dependency tracking -- so long as your traverser knows which attachment to descend into, you can find the link to the unexpanded macro application.
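For instance, something along these lines (the attachment type and the name of its payload field are assumed here):

import scala.tools.nsc.Global

abstract class MacroAwareDeps {
  val global: Global
  import global._

  // Descend into the pre-expansion tree carried by the (assumed)
  // MacroExpansionAttachment, so macro applications that an enclosing
  // expansion threw away are still seen.
  class DepTraverser extends Traverser {
    override def traverse(tree: Tree): Unit = {
      for (att <- tree.attachments.get[MacroExpansionAttachment])
        traverse(att.original) // `original`: assumed pre-expansion payload
      if (tree.symbol != null && tree.symbol != NoSymbol)
        record(tree.symbol)
      super.traverse(tree)
    }
  }

  def record(sym: Symbol): Unit // hook for the dependency tracker
}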

-jason

Grzegorz Kossakowski

Feb 22, 2013, 6:03:48 PM
to scala-i...@googlegroups.com
On 22 February 2013 14:57, Jason Zaugg <jza...@gmail.com> wrote:

Dependency tracking should account for both the macro and the expansion, so you'll have a problem in either case. So the question becomes: which of the two trees do we pass as the primary tree, and which as an attachment? Eugene alluded to the risks of carrying real trees (i.e., ones that are intended to progress through the phases) in attachments -- they are ignored by Transformers and Traversers. But that might not be a big problem for dependency tracking -- so long as your traverser knows which attachment to descend into, you can find the link to the unexpanded macro application.

The problem is that in your example macro1 can throw away the result of macro2's expansion and thus all the attachments. Is there any way the dependency tracker can recover from that situation that I'm missing?

Jason Zaugg

Feb 22, 2013, 6:25:54 PM
to scala-i...@googlegroups.com
On Sat, Feb 23, 2013 at 12:03 AM, Grzegorz Kossakowski <grzegorz.k...@gmail.com> wrote:
[...]

The problem is that in your example macro1 can throw away the result of macro2's expansion and thus all the attachments. Is there any way the dependency tracker can recover from that situation that I'm missing?

The pre-expansion attachment of macro1 would have, as an argument, the expansion of macro2, which would, in turn, have a pre-expansion attachment referring to `macro2` itself.

-jason

Grzegorz Kossakowski

Feb 22, 2013, 6:32:07 PM
to scala-i...@googlegroups.com
On 22 February 2013 15:25, Jason Zaugg <jza...@gmail.com> wrote:
The pre-expansion attachment of macro1 would have, as an argument, the expansion of macro2, which would, in turn, have a pre-expansion attachment referring to `macro2` itself.

Ah yes, that's true. This is enough.

Now, the same information (the attachment) could be used by macro1 to determine that a tree passed to macro1 is an expansion of a macro2 call, and to flag that as an error if macro1 desires to do so.

Eugene Burmako

Feb 22, 2013, 8:02:17 PM
to scala-internals
2) It's the latter - a regular macro.

4.1) Type macros have two modes: a) "something extends TypeMacro(args)", b) everything else, like "type X = TypeMacro(args)". To implement b, you need an API to synthesize classes. Otherwise one would be forced to write boilerplate like "class Temp extends TypeMacro(args); type X = Temp", which wouldn't even work if args refer to type parameters of X. More examples here: https://github.com/scalamacros/kepler/blob/paradise/macros/test/files/run/macro-typemacros-used-in-funny-places-a/Test_2.scala.

4.2) In my opinion, from a philosophical standpoint, synthesizing classes out of nowhere looks only a bit scarier than synthesizing members out of nowhere.

8) Thanks for the explanation! Now I think I understand your original idea.

First I would like to note that function application and pattern matching are quite different from macros, because they don't change the semantics of code. At the moment macros can already introduce new bindings and affect typechecking and type inference.

I agree that this versatility needs to be controlled, but it would be crippling for macro experimentation to introduce limitations. An example of such a limitation is the MacroApply node. Why limit macros to just applications? What about type macros and, possibly, macro annotations? Those would require different mechanisms, and some future macros would probably require other mechanisms.

Also, I'm not very optimistic about doing something non-trivial in namer/typer and then delaying its application until a later moment. Currently we have quite a roundabout way of synthesizing certain members, such as case class methods. This makes understanding the typer harder, and it also introduces inconsistencies - situations where the tree that we mean differs from the tree that we actually see (an example of such a problem is Jason's question about nested macro calls).

Sure, given enough effort and skill, one might be able to reproduce the former from the latter, but that's something I would like to impose neither on reflection API users nor on scalac hackers. It's somewhat similar to the situation we currently have with symbol corruption when one transplants trees from one context to another, which is an internal implementation detail leaking to macro users, creating hurdles out of the blue.

7) If necessary, we can standardize MacroExpansionAttachment (or something similar) in the public API. From the experience of other languages we can see that macro writers already need to cater for the debuggability of the code they produce and to take special measures to play well with IDEs. It would make sense to ask macro developers to help the incremental compiler as well. After all, who but the macro programmer knows what's best for his/her macros?

E.g. just as we have the c.onInfer(TypeInferenceContext) callback to guide type inference in http://docs.scala-lang.org/overviews/macros/inference.html, we could have a c.onCalculateDependencies(IncrementalCompilerContext) callback to help dependency analysis. In comparison with MacroApply this has the benefit of not limiting unexpected macro flavors and unexpected ways to use macros. How does this sound?


Simon Ochsenreither

Feb 22, 2013, 9:31:33 PM
to scala-i...@googlegroups.com
What's the status of this?

It seems like constant inlining manages to paper over the issue that private var ownPhaseRunId = global.NoRunId in SubComponent is sometimes initialized too late. I'm not sure how exactly this is even supposed to happen, because global isn't even eligible for constant inlining.

Grzegorz Kossakowski

Feb 22, 2013, 10:04:26 PM
to scala-i...@googlegroups.com
Yes, please.

Just make sure it's really reproducible with a reasonable amount of effort (e.g. by rebuilding the Scala compiler from a specific sha1).

Simon Ochsenreither

Feb 22, 2013, 10:30:23 PM
to scala-i...@googlegroups.com
Mhhh, I don't think it depends on a specific Scala version; the problem is just uncovered by not doing constant inlining. Should I add a diff to the bug tracker?
... or maybe I don't really understand what the build step should look like. Should I just push the change to my own repo and refer to that SHA1?

By the way, won't getting rid of constant inlining simplify the tracking of changes and their dependencies in things like sbt? I imagine that this would lead to fewer recompilations down the road.

Grzegorz Kossakowski

Feb 22, 2013, 10:36:59 PM
to scala-i...@googlegroups.com
On 22 February 2013 19:30, Simon Ochsenreither <simon.och...@gmail.com> wrote:
Mhhh, I don't think it depends on a specific Scala version; the problem is just uncovered by not doing constant inlining. Should I add a diff to the bug tracker?
... or maybe I don't really understand what the build step should look like. Should I just push the change to my own repo and refer to that SHA1?

Yes, push sha1 to github and refer to it. I'll have a look.
 
By the way, won't getting rid of constant inlining simplify the tracking of changes and their dependencies in things like sbt? I imagine that this would lead to fewer recompilations down the road.

Actually, constant inlining hides dependencies, so it could cause sbt to not recompile when a constant has changed (which is incorrect). At the moment sbt uses a different mechanism for tracking those dependencies than walking trees (see my first e-mail in this thread), so it is immune to the problem we are discussing.

Simon Ochsenreither

Feb 22, 2013, 11:55:11 PM
to scala-i...@googlegroups.com
Hi Grzegorz,


Yes, push sha1 to github and refer to it. I'll have a look.

Ok, I'll do that. Thanks!

Bug: https://issues.scala-lang.org/browse/SI-7174
Branch: https://github.com/soc/scala/commits/poc/no-constant-inlining
SHA1: de4d854d5a2b9782d3a82e7aa94987f1eae40acf
 
Actually, constant inlining hides dependencies, so it could cause sbt to not recompile when a constant has changed (which is incorrect). At the moment sbt uses a different mechanism for tracking those dependencies than walking trees (see my first e-mail in this thread), so it is immune to the problem we are discussing.

Yes, I was assuming that SBT handles it. Getting rid of constant inlining would mean that it would need to recompile less, afaiu.

Thanks and bye,

Simon

Pavel Pavlov

Feb 23, 2013, 10:17:02 AM
to scala-i...@googlegroups.com
Hi Greg, Eugene,


On Saturday, February 23, 2013 2:48:12 AM UTC+7, Grzegorz Kossakowski wrote:
Also, if you think about it, this two-phase design makes sense for another reason. I could traverse trees twice:
  1. after type checking, to capture dependencies on macro applications and their arguments
  2. after the expansion phase, where I can capture dependencies of the macro expansion result
Also, the cool thing about this design is that we could move constant inlining to the expansion phase and solve the problem with dependencies on constants as well.

WDYT?

I'm afraid there is also a third, most problematic kind of dependency: dependencies that a macro implementation uses while executing but which do not appear in explicit form in the expanded tree.
Introducing a member is one example of such a dependency, but it's enough to just 'look at' some entity during macro execution to introduce a dependency on it.

The same situation, where merely asking about something introduces a dependency on it, appears in inter-procedural analysis; we jokingly call such an effect a "quantum effect of measurement".

I believe the precise dependencies of a tree in the presence of macros can be defined as the sum of three components:
1) the dependencies of the (macro-expanded) tree
2) the bodies of all macros invoked during expansion of the tree
3) entities 'looked at' by macros executed during expansion of the tree.
In principle, (2) is just a subset of the more general (3), which can be computed by recording type/tree/symbol accesses during macro expansion.

Besides macros, there are other language features which may require similar techniques to collect precise dependencies:
1) `for` desugaring: it can produce different code depending on the presence of a `withFilter` method, so there is a dependency on the absence/presence of `withFilter`.
2) implicits: the result of implicit search depends on the whole contents of the implicit scope, and also on all existing members of the expression's type for the "method missing" case.
3) applyDynamic: the tree depends on all members of the receiver's type.
4) I wonder if dependencies of the same sort may occur due to regular type inference?

Grzegorz Kossakowski

Feb 27, 2013, 7:55:21 PM
to scala-i...@googlegroups.com
Hi Eugene and others.

I had to let this interesting discussion rest for a few days as I had to catch up with other duties. Now I'm back in the game!

On 22 February 2013 17:02, Eugene Burmako <eugene....@epfl.ch> wrote:
4.1) Type macros have two modes: a) "something extends TypeMacro(args)", b) everything else, like "type X = TypeMacro(args)". To implement b, you need an API to synthesize classes. Otherwise one would be forced to write boilerplate like "class Temp extends TypeMacro(args); type X = Temp",

Well, that boilerplate can be generated by the macro itself. That alone does not justify the existence of introduceTopLevel, no?
 
which wouldn't even work if args refer to type parameters of X. More examples here: https://github.com/scalamacros/kepler/blob/paradise/macros/test/files/run/macro-typemacros-used-in-funny-places-a/Test_2.scala.

I might be missing something, but I don't see an example of a reference to the args of X (the left-hand side of the type definition). Can you point me at the exact line in that file?
 
4.2) In my opinion, from a philosophical standpoint, synthesizing classes out of nowhere looks only a bit scarier than synthesizing members out of nowhere.

I agree. The problem I have with understanding the current design is that introduceTopLevel can be called at any time, anywhere. Therefore it might introduce interesting problems with the order in which compilation units are type checked.

We should have a design where the order of files passed to the compiler does not matter. The very presence of introduceTopLevel seems to go in the opposite direction, no?
 
8) Thanks for the explanation! Now I think I understand your original idea.

First I would like to note that function application and pattern matching are quite different from macros, because they don't change the semantics of code. At the moment macros can already introduce new bindings and affect typechecking and type inference.

True, but still, those effects should be somehow local or encapsulated, right? See my thoughts above.
 
I agree that this versatility needs to be controlled, but it would be crippling for macro experimentation to introduce limitations. An example of such a limitation is the MacroApply node. Why limit macros to just applications? What about type macros and, possibly, macro annotations? Those would require different mechanisms, and some future macros would probably require other mechanisms.

Well, we need some API we can agree on.

I got convinced by Jason that just using tree attachments to preserve trees before macro expansion is the right way to go, so we can drop the idea of MacroApply.
 
Also, I'm not very optimistic about doing something non-trivial in namer/typer and then delaying its application until a later moment. Currently we have quite a roundabout way of synthesizing certain members, such as case class methods. This makes understanding the typer harder, and it also introduces inconsistencies - situations where the tree that we mean differs from the tree that we actually see (an example of such a problem is Jason's question about nested macro calls).

Sure, given enough effort and skill, one might be able to reproduce the former from the latter, but that's something I would like to impose neither on reflection API users nor on scalac hackers. It's somewhat similar to the situation we currently have with symbol corruption when one transplants trees from one context to another, which is an internal implementation detail leaking to macro users, creating hurdles out of the blue.

I agree we can drop the idea of delaying macro expansion and just go with what Jason proposed.
 
7) If necessary, we can standardize MacroExpansionAttachment (or something similar) in the public API. From the experience of other languages we can see that macro writers already need to cater for the debuggability of the code they produce and to take special measures to play well with IDEs. It would make sense to ask macro developers to help the incremental compiler as well. After all, who but the macro programmer knows what's best for his/her macros?

I agree we should expose some attachments as a stable API.
 

E.g. just as we have the c.onInfer(TypeInferenceContext) callback to guide type inference in http://docs.scala-lang.org/overviews/macros/inference.html, we could have a c.onCalculateDependencies(IncrementalCompilerContext) callback to help dependency analysis. In comparison with MacroApply this has the benefit of not limiting unexpected macro flavors and unexpected ways to use macros. How does this sound?

This sounds interesting, but I don't see the whole picture yet. In particular, I don't see how exactly the information about dependencies would flow from the Scala compiler to the incremental compiler.

The idea I have is that there's one simple API that the Scala compiler and the incremental compiler share: Trees, Symbols and Types. The way the incremental compiler would extract dependencies is by walking a tree and looking at all the referred symbols.

The tricky bit is that if you have a dependency on a given symbol, you also want to know in what (tree) context it appears. To explain what I mean, let's consider the following example:

// A.scala
abstract class A
// B.scala
class B extends A
// C.scala
class C { def foo(a: A) = ... }

Here both B and C depend on A, but B depends on A by inheritance and C depends on A by reference. To see why this matters, consider the scenario where we add a new abstract def to A. Now B has to be recompiled, so we'll run refchecks on B and (maybe) detect that the newly introduced member is not implemented by B. C, however, does not need to be recompiled (let's forget about implicit conversions and other details for now), because there's no way it can be affected by a new member in A.

That's a very simple example, but there are more like it where you need to look at the whole context in a tree to understand the kind of dependency you get. The problem is that I don't have a complete list of all the kinds of dependencies we want to recognize. If we had it, it would mean that the work on incremental compilation was done and we had achieved perfection, but we are far from that. Thus I want to design an API within which the incremental compiler can evolve. That's why I want to pull dependencies out of trees and have the freedom, on the incremental compiler side, to classify the discovered dependencies as I see fit.
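To make the classification idea concrete, a rough sketch (illustrative names; the set of kinds is deliberately open-ended):

import scala.tools.nsc.Global

abstract class DepExtractor {
  val global: Global
  import global._

  sealed trait DepKind
  case object ByInheritance extends DepKind
  case object ByReference extends DepKind

  // Classify each discovered dependency by the tree context it occurs in.
  class ClassifyingTraverser extends Traverser {
    override def traverse(tree: Tree): Unit = tree match {
      case Template(parents, _, body) =>
        parents.foreach(p => record(p.tpe.typeSymbol, ByInheritance))
        body.foreach(traverse)
      case _ =>
        if (tree.symbol != null && tree.symbol != NoSymbol)
          record(tree.symbol, ByReference)
        super.traverse(tree)
    }
  }

  // Hook where the incremental compiler records the discovered edge.
  def record(sym: Symbol, kind: DepKind): Unit
}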

To sum up: I see trees as a way for the compiler and the incremental compiler to communicate while evolving at their own pace.

If we can find other design I'm all ears.

--
Grzegorz Kossakowski
Scalac hacker at Typesafe
twitter: @gkossakowski

Grzegorz Kossakowski

Feb 27, 2013, 8:04:02 PM
to scala-i...@googlegroups.com
On 23 February 2013 07:17, Pavel Pavlov <pavel.e...@gmail.com> wrote:
Hi Greg, Eugene,
I'm afraid there is also a third, most problematic kind of dependency: dependencies that a macro implementation uses while executing but which do not appear in explicit form in the expanded tree.
Introducing a member is one example of such a dependency, but it's enough to just 'look at' some entity during macro execution to introduce a dependency on it.

I agree. The challenge is to come up with an API for communicating those dependencies in a way that:
  • allows the incremental compiler to evolve without changes to the compiler
  • is efficient in implementation: we should not consume too much memory
  • captures enough information (see my other post on dependency kinds)
  • is practical in terms of implementation complexity
So far I have settled on trees.

The same situation, where merely asking about something introduces a dependency on it, appears in inter-procedural analysis; we jokingly call such an effect a "quantum effect of measurement".

I believe the precise dependencies of a tree in the presence of macros can be defined as the sum of three components:
1) the dependencies of the (macro-expanded) tree
2) the bodies of all macros invoked during expansion of the tree
3) entities 'looked at' by macros executed during expansion of the tree.
In principle, (2) is just a subset of the more general (3), which can be computed by recording type/tree/symbol accesses during macro expansion.

Yes, as tree attachments.
 
Besides macros, there are other language features which may require similar techniques to collect precise dependencies:
1) `for` desugaring: it can produce different code depending on the presence of a `withFilter` method, so there is a dependency on the absence/presence of `withFilter`.
2) implicits: the result of implicit search depends on the whole contents of the implicit scope, and also on all existing members of the expression's type for the "method missing" case.

Those two are indeed very problematic. However, one should remember that we are not striving for the most precise dependency tracking. If we can come up with a design where the tricky cases are handled at the expense of recompiling a little too much (e.g. changes to implicits always trigger a full recompilation) and the implementation stays simple, then I consider that a winner.
 
3) applyDynamic: the tree depends on all members of the receiver's type.

Fortunately, this one is easy to handle by just walking trees.

Pavel Pavlov

Feb 27, 2013, 11:37:35 PM
to scala-i...@googlegroups.com
I'm not very familiar with this tree-attachment business, so please explain:
Do you store attachments in the class files?

Grzegorz Kossakowski

Feb 27, 2013, 11:50:05 PM
to scala-i...@googlegroups.com
On 27 February 2013 20:37, Pavel Pavlov <pavel.e...@gmail.com> wrote:
I'm not very familiar with this tree-attachment business, so please explain:
Do you store attachments in the class files?

No (at least not so far), but you don't need to. The incremental compiler keeps its own index for tracking dependencies, and the indexing phase runs during Scala compilation (it's an additional compiler phase), so we can capture all the needed information from tree attachments.

Pavel Pavlov

Feb 27, 2013, 11:55:30 PM
to scala-i...@googlegroups.com
So you create some form of pdb (program database) and use it in subsequent compiler runs, right?

Grzegorz Kossakowski

Feb 28, 2013, 12:19:14 AM
to scala-i...@googlegroups.com
On 27 February 2013 20:55, Pavel Pavlov <pavel.e...@gmail.com> wrote:
So you create some form of pdb (program database) and use it in subsequent compiler runs, right?

Yes, in sbt it's called `Analysis`, see:


(unfortunately, this trait is wildly undocumented, which I should fix now that I understand it)

Simon Ochsenreither

Feb 28, 2013, 2:36:34 AM
to scala-i...@googlegroups.com
Hi Grzegorz,

I played with it a bit, trying the usual things like reordering and making it a lazy val/def (not possible, because it's required to be a stable path), but no new insights yet.
Did you find out anything?

Bye,

Simon

Eugene Burmako

Feb 28, 2013, 3:06:40 PM
to <scala-internals@googlegroups.com>
The intended way to use introduceTopLevel is to support type macros like the one shown above, so that people can only get a reference to the conjured type through an expansion of a type macro, with no other way to get to that type. This doesn't create dependencies on the order of compilation. But I agree that c.introduceTopLevel seems fishy, because it can be misused (for the same reason c.introduceMember didn't gain much popularity), and I'm looking for alternatives to it.

The onCalculateDependencies callback is just a way to give the macro writer control over dependency tracking if he/she desires it. The exact protocol is a subject for discussion. Should we decide to use attachments, IncrementalCompilerContext will use attachments. If we need more information about kinds of dependencies, we can refine the context to expose more controls.

This isn't mutually exclusive with what you propose. The default way will involve trees, as you describe. However, if a macro programmer decides to go the extra mile to manifest some dependencies that cannot be discovered by static analysis, we should probably provide a way to do that. Maybe not right now, but rather once the API you're building has stabilized.



Grzegorz Kossakowski

Feb 28, 2013, 5:13:52 PM
to scala-i...@googlegroups.com
On 28 February 2013 12:06, Eugene Burmako <eugene....@epfl.ch> wrote:
The intended way to use introduceTopLevel is to support type macros like the one shown above, so that people can only get a reference to the conjured type through an expansion of a type macro, with no other way to get to that type. This doesn't create dependencies on the order of compilation. But I agree that c.introduceTopLevel seems fishy, because it can be misused (for the same reason c.introduceMember didn't gain much popularity), and I'm looking for alternatives to it.

Yes, that would be great.
 
The onCalculateDependencies callback is just a way to give the macro writer control over dependency tracking if he/she desires it. The exact protocol is a subject for discussion. Should we decide to use attachments, IncrementalCompilerContext will use attachments. If we need more information about kinds of dependencies, we can refine the context to expose more controls.

This isn't mutually exclusive with what you propose. The default way will involve trees, as you describe. However, if a macro programmer decides to go the extra mile to manifest some dependencies that cannot be discovered by static analysis, we should probably provide a way to do that. Maybe not right now, but rather once the API you're building has stabilized.

I agree. Ok, it looks like we have a plan!

Thanks a lot to everybody who contributed to this rather lengthy thread!