Type macros in Scala 2.11.0?


John Sullivan

May 30, 2013, 1:09:35 PM
to scala-i...@googlegroups.com
Good morning!

Any chance type macros are going to make it into Scala 2.11.0? I didn't see any mention of them in the 2.11.0-M3 release notes [1]. And my type macro project does not compile against the M3 release ("macro is now a reserved word; usage as an identifier is deprecated").

Adriaan Moors

May 30, 2013, 3:47:39 PM
to scala-i...@googlegroups.com
Hi John,

Good timing! We're currently finalizing our discussions around macros to present our macro roadmap at Scaladays. We're not ready to broadcast this to a wider audience, but I'm happy to hear scala-internals' thoughts.

For now, the short answer is that type macros will stay in paradise a bit longer. We're not ready to include them in 2.11.

Here's an outline of our current thinking. Macros and reflection will likely stay experimental in 2.11, but we'll distinguish two categories:
 (1) features we're planning to refine and support in 2.12 
 (2) features that may disappear at any time (usually we don't introduce experimental features that we're not planning to stabilize, but this being such a hard design challenge, we're going to make an exception).

Concretely:

Features in 2.11 that we encourage (likely still labelled experimental, but looking good for 2.12):
* blackbox macros (behave exactly like methods from a code-understanding point of view; see the sketch below)
* quasiquotes (a nicer way to write trees)
* macro bundles
* relaxed signatures of macro impls

Truly experimental 2.11 features (may disappear in 2.12, or they may stay):
* single-parametric type class materialization (see #2494)
* whitebox macros
* macro annotations (useful for pickling, slick, enums,...?)
(* not really a feature, but you can always cast down to reflect.internal types and get at the full compiler API, though we don't make any guarantees)

In paradise (Eugene's fork), not supported (listed by descending likelihood of making it to master):
* multi-parametric type class materialization aka fundep materialization (Iso's, extracting the essence of whitebox macros)
* introduceTopLevel -- currently in master, but I'd like to get it out (can always cast)
* untyped macros (https://github.com/aztek/scala-workflow)
* type macros
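
To make the first category concrete, here is a minimal sketch of a blackbox macro combining quasiquotes with the relaxed (tree-based) impl signature, written against the 2.11-style blackbox API for illustration; the Debug object and its printing behavior are invented, not part of any planned release:

import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context

object Debug {
  // Blackbox: callers can reason about trace as an ordinary method.
  def trace(x: Any): Unit = macro traceImpl

  // "Relaxed signature": the impl takes and returns c.Tree, not c.Expr.
  def traceImpl(c: Context)(x: c.Tree): c.Tree = {
    import c.universe._
    val source = show(x) // source form of the argument expression
    // Quasiquote: build the expansion without manual Tree construction.
    q"""println($source + " = " + $x)"""
  }
}

For example, Debug.trace(2 + 2) would expand at the call site into code that prints "2.+(2) = 4".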

cheers
adriaan


--
See you at Scala Days (June 10–12, NYC)!

Simon Ochsenreither

May 30, 2013, 5:11:28 PM
to scala-i...@googlegroups.com
Hi,


* macro annotations (useful for pickling, slick, enums,...?)

Having worked on enums, I can say that macro annotations are not enough to provide an API that I'd consider "high-quality".
 
In paradise (Eugene's fork): not supported (by descending likelihood of making it to master):
* multi-parametric type class materialization aka fundep materialization (Iso's, extracting the essence of whitebox macros)
* introduceTopLevel -- currently in master, but I'd like to get it out (can always cast)
* untyped macros (https://github.com/aztek/scala-workflow)
* type macros

Ugghh ... macros without type macros? Isn't that pretty pointless?

From my POV, it's better to remove macros altogether, then. The features that would be shipping are "neat", but I don't think they are that useful in practice, and they are certainly not worth the "perceived complexity" if they are so severely limited that one can't build things that are truly helpful to end users (see F# type providers).

Anyway, people who need the "real" macro functionality could just change their Scala dependencies to the Macro Paradise ones, but it would be pretty sad if we had this huge fork where people had to replace their Scala libraries if they wanted to use some library that depended on stuff in Macro Paradise.

And without untyped macros we can forget about ever having decent enums.

Bye,

Simon

Eiríkr Åsheim

May 30, 2013, 5:37:21 PM
to scala-i...@googlegroups.com
On Thursday, May 30, 2013 5:11:28 PM UTC-4, Simon Ochsenreither wrote:
Ugghh ... macros without type macros? Isn't that pretty pointless?

From my POV, it's better to remove macros alltogether then. The features which would be shipping are "neat", but I think they are not that useful in practice and they are certainly not worth the "perceived complexity" if they are so severely limited that one can't build things which are truly helpful to end-users (see F# type providers).

So, I definitely sympathize with disappointment around the reluctance to release type macros, but I don't share this sentiment at all.

Macros are already an amazing tool to produce faster Scala code. This is arguably a bit of a niche interest, but it's one I have so I wouldn't want to see them gone.

-- Erik

P.S. I would have posted from my normal email address but recently my emails have not been getting through to the mailing list.

Grzegorz Kossakowski

May 30, 2013, 6:18:24 PM
to scala-internals
On 30 May 2013 23:37, Eiríkr Åsheim <stark...@gmail.com> wrote:
On Thursday, May 30, 2013 5:11:28 PM UTC-4, Simon Ochsenreither wrote:
Ugghh ... macros without type macros? Isn't that pretty pointless?

From my POV, it's better to remove macros alltogether then. The features which would be shipping are "neat", but I think they are not that useful in practice and they are certainly not worth the "perceived complexity" if they are so severely limited that one can't build things which are truly helpful to end-users (see F# type providers).

So, I definitely sympathize with disappointment around the reluctance to release type macros, but I don't share this sentiment at all.

Macros are already an amazing tool to produce faster Scala code. This is arguably a bit of a niche interest, but it's one I have so I wouldn't want to see them gone.

Producing faster Scala code is not a niche interest at all. That's one very important use case for macros that doesn't require type synthesis.

There are plenty of other use cases for def macros, like Scala Async, asserts on steroids, typesafe string interpolation, etc.
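
To make the "asserts on steroids" use case concrete, here is a minimal blackbox sketch in the same 2.11 style (the PowerAssert object and its behavior are invented for this example; real power-assert libraries render the whole expression tree on failure):

import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context

object PowerAssert {
  // On failure, report the source text of the condition instead of
  // a bare "assertion failed".
  def assert(cond: Boolean): Unit = macro assertImpl

  def assertImpl(c: Context)(cond: c.Tree): c.Tree = {
    import c.universe._
    val source = show(cond) // e.g. "xs.length.==(3)"
    q"""if (!$cond) throw new AssertionError("assertion failed: " + $source)"""
  }
}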
 
P.S. I would have posted from my normal email address but recently my emails have not been getting through to the mailing list.

Does it go to the moderation queue, or what's the problem?

--
Grzegorz Kossakowski
Scalac hacker at Typesafe
twitter: @gkossakowski

John Sullivan

May 30, 2013, 7:11:50 PM
to scala-i...@googlegroups.com
Hi all,

Thanks for your detailed answer Adriaan, it is helpful.

Having been working for the past couple of months on a type-macro-based library that I think is pretty cool [1], and having been preparing a presentation on that work for Scala Days this year, it is a little disappointing to hear that type macros are not going to make it into 2.11. But if they made it into 2.12, even as an experimental feature, I would be happy with that.

I think it might help if you could clarify a little bit between two possibilities: (1) type macros are not going into 2.11 because they are not ready yet; and (2) type macros are not going in because you are having second thoughts about whether this was a good idea in the first place. I am assuming you are just holding off until things are firmed up a bit: reworking/solidifying the API, working out critical bugs, etc. Right? That's how I parse Adriaan's response, but it's not entirely clear. I can totally see taking your time and doing things right, but I sort of agree with Simon in that if you are going to support macros at all, type macros seem pretty important.

Best, John



Paul Phillips

May 30, 2013, 10:47:16 PM
to scala-i...@googlegroups.com

On Thu, May 30, 2013 at 4:11 PM, John Sullivan <john.sull...@gmail.com> wrote:
(1) type macros are not going in to 2.11 because they are not ready yet; and (2) type macros are not going in because we are having second thoughts that this was a good idea in the first place.

That's rigging the game a bit; by definition, an experimental feature is an experiment.


John Sullivan

May 30, 2013, 11:34:45 PM
to scala-i...@googlegroups.com
Yes, you're right. I should just say I hope they make it. Best, John


Xuefeng Wu

May 31, 2013, 12:30:23 AM
to scala-i...@googlegroups.com
Although macros are experimental, string interpolation depends on macros and is a solid feature.

Macros have been a successful experiment in practice, so I think the concern is not the 'experimental' label itself, but that the API may not be stable.





--

Regards, Xuefeng Wu / Alan

Xuefeng Wu

May 31, 2013, 1:41:14 AM
to scala-internals
In my opinion, the macro API is not good enough yet.
It is too low-level, and applying macros may require duplicating code.

It would be better if it were more like ANTLR, which can process a defined grammar.
--

Regards, Xuefeng Wu / Alan

Adriaan Moors

May 31, 2013, 2:52:32 AM
to scala-i...@googlegroups.com

On Thu, May 30, 2013 at 2:11 PM, Simon Ochsenreither <simon.och...@gmail.com> wrote:
The features which would be shipping are "neat", but I think they are not that useful in practice and they are certainly not worth the "perceived complexity" if they are so severely limited that one can't build things which are truly helpful to end-users (see F# type providers).

I think that's a bit of an over-generalization. I can think of at least two ways of implementing type providers with the macro features planned for 2.11.
1. Using what Eugene calls "fundep materialization" (whitebox implicit macros that influence their result type by synthesizing an expression for the required type class instance), or, 
2. more directly, using macro annotations, which are intended for this exact purpose.  We haven't implemented macro annotations the way we'd like to yet, but the idea is that they can supply definitions to be added as members to the class they annotate.
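
For concreteness, here is a rough sketch of pattern (1) against the 2.11-style whitebox API; the Repr type class, materialize, and the hard-coded choice of String are all invented for illustration:

import scala.language.experimental.macros
import scala.reflect.macros.whitebox.Context

// Hypothetical type class relating a type A to a derived representation B.
trait Repr[A, B] { def to(a: A): B }

object Repr {
  // Whitebox implicit macro: B is left open at the call site and is
  // fixed by the type of the expression the macro synthesizes; this is
  // the "fundep materialization" pattern.
  implicit def materialize[A, B]: Repr[A, B] = macro impl[A, B]

  def impl[A: c.WeakTypeTag, B](c: Context): c.Tree = {
    import c.universe._
    val a = weakTypeOf[A]
    // Sketch only: always pick String; a real materializer would
    // inspect A's structure and compute a matching representation.
    q"""new Repr[$a, String] { def to(x: $a): String = x.toString }"""
  }
}

// Hypothetical use site: B is determined by the expansion.
//   def repr[A, B](a: A)(implicit r: Repr[A, B]): B = r.to(a)
//   repr(42) // : String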

Type macros are much, much more general in that they can occur whenever a type may occur. That comes at a tremendous cost, for code understanding, specification and implementation. What does it mean to have an arbitrary computation at a type position? Is that really crucial?

Adriaan Moors

May 31, 2013, 2:55:11 AM
to scala-i...@googlegroups.com
Hi John,

I've only had a brief look at congeal, but I would expect macro annotations to be sufficient for your use case. As long as you're only using type macros as a parent type in a class definition, it should be easy to use a macro annotation instead. Note that, as I said in another message, macro annotations aren't implemented yet (the way I would like).

cheers
adriaan



Eugene Burmako

May 31, 2013, 3:05:38 AM
to <scala-internals@googlegroups.com>
The description of the idea is here: http://scabl.blogspot.ch/2013/03/cbdi-2.html. You can take a look at some code snippets and that will immediately show some of the use cases, e.g. "trait UServiceComponent { self: api[URepositoryComponent] => ... }".

Eiríkr Åsheim

May 31, 2013, 10:18:52 AM
to scala-i...@googlegroups.com
Anyone who uses shapeless probably considers computation at the type level crucial! :)

Adriaan Moors

May 31, 2013, 12:24:24 PM
to scala-i...@googlegroups.com
I'm all for type-level computation. I don't think type macros are the way to go.
In an ideal world, I'd replace implicits + type-level computation (Prolog at the type level) with type functions (a decidable subset of Scala lifted to the type level).

Macros make sense at the value level because method evaluation usually happens in the run-time stage.
Types are already "evaluated" in the compile-time stage, so there is no need for macros there.

In the meantime, there's a lot you can do with whitebox macros and type class instance materialization already.


On Fri, May 31, 2013 at 7:18 AM, Eiríkr Åsheim <stark...@gmail.com> wrote:
Anyone who uses shapeless probably considers computation at the type level crucial! :)


Simon Ochsenreither

May 31, 2013, 12:51:58 PM
to scala-i...@googlegroups.com


I think that's a bit of an over-generalization. I can think of at least two ways of implementing type providers with the macro features planned for 2.11.

I think the classic test is FreeBase. FreeBase has thousands of types, which need to be lazily instantiated if you don't want to pull down the complete database schema for some simple type completion.
How would macro annotations help here? Having macro annotations which in turn create members with even more macro annotations?

Anyway, my experience with macros in general is that it is too early to tell which parts and which functionality will be needed and which won't. Example: the tools to build decent enums are all in the third category (and some are not even there yet).

Tom Switzer

May 31, 2013, 12:56:51 PM
to scala-i...@googlegroups.com
I think that's kind of the thing, though. A lot of the desired functionality of type macros could perhaps be subsumed by other things (as mentioned by Adriaan above). I think Shapeless's type-level natural number computations are great. If they could be written as lifted type-level functions, that would actually still solve a lot of problems.

That said, def macros are clearly incredibly useful. We make use of them in Spire for performance (a big win for us), but also for auto-generation of type class instances.
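
To illustrate the performance use, here is a naive sketch of a cfor-style macro in the spirit of Spire's; the Loops object is invented, and Spire's real implementation goes further (e.g. it inlines the function literals):

import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context

object Loops {
  // Expands to a bare while loop: no Range or Iterator is allocated.
  def cfor[A](init: A)(test: A => Boolean, next: A => A)(body: A => Unit): Unit =
    macro cforImpl[A]

  def cforImpl[A](c: Context)(init: c.Tree)(test: c.Tree, next: c.Tree)(body: c.Tree): c.Tree = {
    import c.universe._
    // Note: quasiquotes are not hygienic; a real macro would use c.freshName.
    q"""
      var cursor = $init
      while ($test(cursor)) {
        $body(cursor)
        cursor = $next(cursor)
      }
    """
  }
}

// Usage sketch: Loops.cfor(0)(_ < 10, _ + 1) { i => println(i) }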



John Sullivan

May 31, 2013, 12:57:02 PM
to scala-i...@googlegroups.com
Excellent!

I'll be happy to migrate the code to use macro annotations when it seems like the appropriate time. Maybe that time is now? I see that you listed macro annotations as a "truly experimental 2.11 feature". That sure beats being at the bottom of the pile with type macros. Could someone point me to a good place to start learning about macro annotations? I don't see them mentioned on the Scala Documentation page [1].

I had a feeling that type macros might have been more than I needed to make it work. I was actually thinking I might be able to write a monad to do it, but I do need to be able to extract an API from an arbitrary trait.

Thanks Adriaan et al.

Best, John

Rex Kerr

May 31, 2013, 12:58:51 PM
to scala-i...@googlegroups.com
I agree regarding type-level computation. But why restrict yourself to a decidable subset instead of merely (operationally) requiring it to be computable? Having a Turing-complete type system doesn't seem to have caused a problem so far. You can always have a supervisor thread slaughtering the build if a type computation does not terminate in a reasonable amount of time (set as a command-line option).

  --Rex

Adriaan Moors

May 31, 2013, 1:02:12 PM
to scala-i...@googlegroups.com
I agree we could enforce decidability trivially with a timer. The nice side effect of decidability is that you end up with a smaller subset that would be easier to interpret.
As I was alluding to with my staging comment, I don't want type functions to have to be compiled separately and run reflectively against an API.
They must be a spec'ed subset of the language that doesn't explicitly call into the compiler.

Rex Kerr

May 31, 2013, 1:15:42 PM
to scala-i...@googlegroups.com
It would be nice to not have to compile type functions, but I'm worried that if it works like broken Scala, it's going to be syntactically clunky at best and a source of continued confusion.  ("Wait, why can't I do that?")  And I am not convinced that what we want to do with type computation, given that the input is the output of a Turing machine, can be sensibly restricted to an innately decidable language.  Any sort of recursion with conditions would probably be forbidden, for example, unless you added rather complex machinery to decide whether this particular form of recursion is decidable.  My hunch is at that point it's better to just let the whole language loose and just accept that there may be multiple passes of compiling.  (But then there are tricky issues of dependencies; I'm not sure if mandating a lack of dependency gives adequate flexibility.)

In terms of non-obviousness of how to do it best/well/right, this feature ranks pretty high.

  --Rex

Adriaan Moors

May 31, 2013, 1:17:54 PM
to scala-i...@googlegroups.com
I think the classic test is FreeBase. FreeBase has thousands of types, which need to be lazily instantiated if you don't want to pull down the complete database schema for some simple type completion.
How would macro annotations help here? Having macro annotations which in turn create members with even more macro annotations?
They won't. You don't need type macros or annotation macros to create new types for auto-completion.
Any (whitebox) macro can create new (local) anonymous classes. Since you're talking auto-completion, we're at the value level (you're auto-completing a call on a value).
If that value corresponds to an expression synthesized by your macro, you control the type of that value.
This is how Grzegorz implemented auto-completion on arbitrary REST APIs (if I understood correctly).
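
A toy version of that mechanism, for concreteness; Rest.endpoint and its fixed members are invented, and a real implementation would derive the members from the service's schema at compile time:

import scala.language.experimental.macros
import scala.reflect.macros.whitebox.Context

object Rest {
  // Whitebox: the declared result type Any is refined to the type of
  // the synthesized anonymous class, so its members become visible to
  // the type checker (and hence to auto-completion).
  def endpoint(url: String): Any = macro endpointImpl

  def endpointImpl(c: Context)(url: c.Tree): c.Tree = {
    import c.universe._
    // Sketch: fixed members; imagine them generated from a schema
    // fetched at compile time via the (statically known) url.
    q"""new { def id: Int = 0; def name: String = "" }"""
  }
}

// Usage sketch (a structural result type may require
// import scala.language.reflectiveCalls at the use site):
//   val user = Rest.endpoint("http://example.com/users/1")
//   user.name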

Annotation macros are intended to introduce new types (nested in the type they annotate), but this is indeed a one-shot deal.
That's a crucial restriction so that we stand a chance of supporting them in IDEs.

Type macros are about intercepting the creation of types when there's no corresponding macro-created value.
 
Anyway, my experience with macros in general is that it is too early to tell which parts and which functionality will be needed and which doesn't. Example: The tools to build decent enums are all in the third category (and some are not even there yet).
I'm not arguing necessity. I'm arguing that type macros are not something I want to add to the language on principle.

In my opinion (and I was part of the team that implemented this, so any pointed fingers are curved right back at myself),
integrating implicits + type inference to yield type-level computation is our million-dollar mistake (it's not too late to keep it from costing billions, fortunately).

It's impossible to fix bugs in our type-level "language" because it's impossible to spec that language.
This holds a fortiori for type macros.

We want Scala to become simpler (to understand) as we evolve it.

This is why we emphasize blackbox macros, as their users can think of them as normal method calls.
Thus, none of the complexity budget is spent from the perspective of consumers of those abstractions (of course, macro authors need to invest in implementing them)

 

Adriaan Moors

May 31, 2013, 1:20:53 PM
to scala-i...@googlegroups.com

On Fri, May 31, 2013 at 10:15 AM, Rex Kerr <ich...@gmail.com> wrote:
In terms of non-obviousness of how to do it best/well/right, this feature ranks pretty high.

Indeed. That's why we're holding off.

In my head, type functions would operate strictly on types (input and output, with values snuck in through singleton types),
so that I think it would be relatively easy to restrict the language to be decidable.

Essentially, it would look like pattern matching on types with structural recursion.
The tricky bit is to define a useful "structure/size" metric for types.

My inspiration for all this is Tim Sheard's Omega. 

Adriaan Moors

May 31, 2013, 1:36:34 PM
to scala-i...@googlegroups.com

On Thu, May 30, 2013 at 2:37 PM, Eiríkr Åsheim <stark...@gmail.com> wrote:
P.S. I would have posted from my normal email address but recently my emails have not been getting through to the mailing list.

yeah, it was stuck in the spam trap as Greg surmised -- your plastic address is now forever pre-approved for non-spamminess

Eugene Burmako

May 31, 2013, 1:38:17 PM
to scala-i...@googlegroups.com
It's a shame to see macros deemed useless for type-level computations because of implementation warts such as the separate compilation requirement.

In my opinion, macros are the ultimate answer to compile-time programmability. Homogeneity is a very important property of a metaprogramming facility; otherwise we're going to end up with another DSL, à la the current implicit-based DSL for type computations.

I sympathize with the decidability argument, but it does not rule out type macros, since no one prevents us from coming up with a mechanism that restricts the bodies of macro impls to acceptable shapes.

Therefore I suggest we approach the type-level computation problem systematically, by thinking about how it melds into the metaprogramming discourse in general. Another type-level DSL is a thing in itself, but pushing the state of the macro art will benefit the entire language (e.g. the decidability restrictor would be useful outside macros, and the reflection API for working with types is useful outside macros, but with the DSL approach those will be forever locked into that particular DSL).



Eugene Burmako

May 31, 2013, 1:51:08 PM
to scala-internals
>> What does it mean to have an arbitrary computation at a type position?
If we compare type macros and type functions here, what would be the
crucial difference?

Only the possibility of arbitrary computations, or something else? I would imagine it's not the notion of doing computations at the type level that is scary, right? (We already have such a capability built into type aliases.)


Eugene Burmako

May 31, 2013, 1:57:19 PM
to scala-internals
Oh right, I forgot to mention this in my earlier general message about macros, but both "reflectively" and "against an API" are just implementation details. Macros can be interpreted, which would remove the separate compilation / reflective invocation issue, and they can then use quasiquotes and type quasiquotes to abstract away the API.

Having done all that, we'll have purified the essence of compile-time computation. We'll have familiar Scala code (optionally restricted for decidability) that runs at compile time and creates/analyzes Scala code using templates expressed as Scala code. Exactly zero conceptual overhead (no notion of separate compilation, no notion of trees, etc.), but drastically increased possibilities for the language. Isn't that paradise?

Grzegorz Kossakowski

May 31, 2013, 2:25:25 PM
to scala-internals
On 31 May 2013 19:38, Eugene Burmako <xen...@gmail.com> wrote:
It's a shame to see macros deemed unuseful for type-level computations, because of some implementation warts such as the separate compilation requirement.

I strongly disagree with undermining the value of separate compilation. The idea of separate compilation is fairly old and has proven to be crucial for implementing big systems. One of the big contributions of Java (and the JVM) is that it pushed separate compilation really, really far. Any metaprogramming design should take into account how to scale to really large systems.

Eugene Burmako

May 31, 2013, 2:33:39 PM
to <scala-internals@googlegroups.com>
I think it'd be best to support both: an interpreted style for lightweight macros, and a mixed or fully compiled style for big macros that require the best possible performance.


Grzegorz Kossakowski

May 31, 2013, 2:38:21 PM
to scala-internals
On 31 May 2013 20:33, Eugene Burmako <eugene....@epfl.ch> wrote:
I think it'd be the best to support both: interpreted style for lightweight macros and mixed or solely compiled style for big macros that require the best possible performance.

Oh, I think I misunderstood you. I thought you meant the problems with separate compilation when you synthesize new top-level types. I see now that you meant just the problem of compiling macros before the compiler can execute them. I find this to be of little concern, as you can easily solve it with build tools and multiple projects. Is this really a problem in practice?

Rex Kerr

May 31, 2013, 2:43:13 PM
to scala-i...@googlegroups.com
It's a problem with complex dependencies.

For simple dependencies, no biggie.

  --Rex


Miles Sabin

May 31, 2013, 2:47:44 PM
to scala-internals
On Fri, May 31, 2013 at 5:24 PM, Adriaan Moors
<adriaa...@typesafe.com> wrote:
> On Fri, May 31, 2013 at 7:18 AM, Eiríkr Åsheim <stark...@gmail.com> wrote:
> > Anyone who uses shapeless probably considers computation at the type level crucial! :)
>
> I'm all for type-level computation. I don't think type macros are the way to go.
> In an ideal world, I'd replace implicits + type level (prolog at the type
> level) by type functions (a decidable subset of scala lifted to the
> type-level).

FTR, I'm largely in agreement with the above (although I wouldn't be
quite so disparaging about Prolog ;-)

My uses of macros in shapeless are largely workarounds for features
missing in the language and its type system, and I would much rather
see the problems dealt with at source.

> In the mean time, there's a lot you can do with whitebox macros and type
> class instance materialization already.

Indeed ... I should have some exciting new stuff to show by Scala Days :-)

Cheers,


Miles

--
Miles Sabin
tel: +44 7813 944 528
skype: milessabin
gtalk: mi...@milessabin.com
g+: http://www.milessabin.com
http://twitter.com/milessabin

Adriaan Moors

May 31, 2013, 2:53:55 PM
to scala-i...@googlegroups.com

On Fri, May 31, 2013 at 11:47 AM, Miles Sabin <mi...@milessabin.com> wrote:
Indeed ... I should have some exciting new stuff to show by Scala Days :-)

Can't wait!!

Grzegorz Kossakowski

May 31, 2013, 2:56:08 PM
to scala-internals
On 31 May 2013 20:43, Rex Kerr <ich...@gmail.com> wrote:
It's a problem with complex dependencies.

Is it documented anywhere? Is the problem any different from the classical problem with complex dependencies, where the solution is to break things into smaller pieces very carefully?

Eugene Burmako

May 31, 2013, 3:12:20 PM
to scala-internals
Why do we have to pit macros and the type system against each other? They complement each other. Types can grant macros expressiveness, whereas macros can give types a stairway from the term level to the type level.

Also, I would like to see a precise line drawn between type macros and type functions. So far I've only got the impression that the former are sort of hacky and uncool, whereas the latter are principled and the way to go. Adriaan, could you please flesh out the definitions a bit more?


Rex Kerr

May 31, 2013, 3:26:48 PM
to scala-i...@googlegroups.com
I'm not sure it's documented anywhere.  I know that Erik and Tom have got piles of plus(times(a,5),b) stuff in Spire specifically because they felt it was too difficult to break apart the use sites and the macro creation sites.  (That sort of stuff is very hard for me to parse, which really limits my ability/desire to contribute to Spire.)

And I know that in my mathematical code, within the same package object, I write defs that are both needed to define macros and would benefit from having the macros defined. I guess in principle I can layer the pieces on by making a bunch of traits, each extending the last, with the package object eventually extending the last trait, in order to build up prerequisites and macros. But it'd be a major headache to navigate a mess like that, especially since it would break up the natural groupings by topic.

I'm not sure whether it is possible to disentangle all valid circular dependencies by introducing traits at the right places. I don't have a clear enough idea of what invariant or other property would need to be maintained to write something that looked like a proof one way or the other. The compiler has more flexibility than we do for filling in partial information in phases, so I suspect an un-disentangleable example is possible. But even if not, the gyrations to avoid it would still cause a problem.

  --Rex



Adriaan Moors

May 31, 2013, 3:43:46 PM
to scala-i...@googlegroups.com

On Fri, May 31, 2013 at 12:12 PM, Eugene Burmako <eugene....@epfl.ch> wrote:
Adriaan, could you please flesh out the definitions a bit more?

A type macro is to a type function as a def macro is to a method.
Macros are a staging mechanism -- they build a term for the next stage by calling into an unstable and unspecified compiler/reflection API. [*]
Type functions and value methods express their computation directly in terms of a specified language.

Syntactically, the language of type functions would be extremely simple: Scala types as they are now + pattern matching on types.
The challenge is to define the semantics. What's a reducible type? When do we beta-reduce, or "evaluate" the pattern match?
How do we kind-check these types? As I said before, I'm not that concerned with decidability.

These questions are answered in Omega for a Haskell-like language. If we're going to implement type level computation, I'd like to see a small subset of that work, tuned for Scala.

Examples of the kinds of computation I envision: anything necessary to express the types in HList and to remove the CanBuildFrom pattern from the collections.

[*] Yes, quasi-quotes are great as an abstraction mechanism here, but I don't see them as a solution for type-level computation, nor would a perfectly stable reflection API. There's no need for staging.

Eugene Burmako

May 31, 2013, 4:03:45 PM
to <scala-internals@googlegroups.com>
Well, I think it's time for us to extend the definition of a macro to just a custom function that runs at compile time. Even as they stand right now, macros can produce types that are used immediately during the compilation stage, so they already don't fit the traditional definition.

Could you please explain the problem with quasiquotes that produce types? Using the analogue of an already well-received mechanism, they would be able to construct and deconstruct types without the need to introduce special syntax for lifting pattern matching to the type level. Also, I don't think staging is relevant here. Macros operating on types and producing types don't do staging.

Speaking of Scala types + pattern matching on types, why abandon the abstraction mechanisms provided by Scala and reduce them to just functions? What if I would like to modularize my type functions in OO style? How do I do that? Macros are first-class members of the language: one can import/unimport them, use mixin composition, etc. What would type functions offer programmers in this regard?

Also, it would be great if you could elaborate a bit more on the subset of Omega you would like to see in Scala. I remember your link to section 18 of https://omega.googlecode.com/files/OmegaManual.pdf. Which small subset of it do you have in mind?




Adriaan Moors

May 31, 2013, 5:35:53 PM
to scala-i...@googlegroups.com
On Fri, May 31, 2013 at 1:03 PM, Eugene Burmako <eugene....@epfl.ch> wrote:
Speaking of Scala types + pattern matching on types, why abandon the abstraction mechanisms provided by Scala and reduce them to just functions? What if I would like to modularize my type functions in OO-style? How do I do that?
I never said to abandon the existing abstraction mechanisms -- quite the opposite: type functions would be members of classes just like type aliases are, and can partake in the whole abstraction game.
 
Macros are first-class member of the language, one can import/unimport them, use mixin composition, etc. What would type functions offer the programmers in this regard?
The domain of discourse of macros is outside of the language, it's the implementation of the language -- that's my point!
You're "scripting the compiler". It may be acceptable to construct trees for values that way, but it's much too low-level when it comes to expressing types.
Type functions would be part of the object level (i.e., strictly reasoning inside the type system), whereas macros are meta: they reason about/manipulate the type system.

I don't want to derail the (def) macro work with my dreaming of type functions. We're not ready for them yet.
More research is needed. I unfortunately can't afford to work on that right now.

More importantly, we have enough on our plates stabilizing the macro features we want to support. 

Let's get the value level right first before we turn to types.
We already have implicits and type inference to battle there.

I also am strongly convinced a lot of type-level hacking can be done with the features as they are planned. Necessity is the mother of invention.
I was also happy to hear positive feedback from Miles. 

Eugene Burmako

May 31, 2013, 6:14:55 PM
to <scala-internals@googlegroups.com>, Adriaan Moors, Miles Sabin
1) I rather meant to ask how one would write complex type functions. What abstraction mechanisms would be available, e.g. to remove duplication between two similar type functions?

2) Being meta- or macro- has its benefits. We can use familiar Scala code (probably restricted in a way) and use familiar abstractions (the reflection API) to solve a whole range of problems, whereas tools given to a DSL programmer will most likely be inapplicable in other domains.

3a) Speaking of being low-level: would you call this type macro low-level? Why?

class Set[T] {
  def map[U](f: T => U): Result[U]
  type Result[NewEl] = NewEl match {
    case tq"Boolean" => tq"BitSet"
    case _ => tq"Set[$NewEl]"
  }
}

3b) I've been looking through the Omega manual trying to find out another good example to discuss, and then I stumbled upon this type function:

plus :: Nat ~> Nat ~> Nat
{plus Z y} = y
{plus (S x) y} = S{plus x y}

An interesting thing here is that we need extended kinds to support this kind of signature. If we did the proposed DSL for type-level programming, we would need to come up with special syntax for that, and then thread support for such kinds through the type checker. With macros, we can reuse a pre-existing language mechanism:

macro type plus[M: Nat, N: Nat] = tq"$M" match {
  case tq"Z" => tq"N"
  case tq"S[$M]" => tq"S[plus[$M, $N]]"
}

4) Also, let's not jump to conclusions solely based on ideology. It sounds nice to claim that macros are low-level, are too rough to touch such sophisticated beings as types, don't deal with the problems at the root, etc., but let's examine the situation concretely using examples. If macros are indeed ill-suited for the tasks at hand, you know me, I'll be the most fervent zealot (though only after first trying to make macros better), but let's not get ahead of ourselves...



Rich Oliver

Jun 2, 2013, 6:19:06 PM
to scala-i...@googlegroups.com
On Friday, May 31, 2013 7:47:44 PM UTC+1, Miles Sabin wrote:
FTR, I'm largely in agreement with the above (although I wouldn't be
quite so disparaging about Prolog ;-)

My uses of macros in shapeless are largely workarounds for features
missing in the language and its type system, and I would much rather
see the problems dealt with at source.

Miles

I'm so glad to hear that people (in particular, people far more expert than me) are thinking that way. My great concern with macros was that they would become a substitute for providing features at the language level. More generally, my concern has been that Scala has a bad case of syntactic anorexia.

Roland Kuhn

Jun 3, 2013, 1:31:31 AM
to scala-i...@googlegroups.com
On 31 May 2013 at 23:35, Adriaan Moors wrote:


On Fri, May 31, 2013 at 1:03 PM, Eugene Burmako <eugene....@epfl.ch> wrote:
Speaking of Scala types + pattern matching on types, why abandon the abstraction mechanisms provided by Scala and reduce them to just functions? What if I would like to modularize my type functions in OO-style? How do I do that?
I never said to abandon the existing abstraction mechanisms -- quite the opposite: type functions would be members of classes just like type aliases are, and can partake in the whole abstraction game.
 
Macros are first-class member of the language, one can import/unimport them, use mixin composition, etc. What would type functions offer the programmers in this regard?
The domain of discourse of macros is outside of the language, it's the implementation of the language -- that's my point!
You're "scripting the compiler". It may be acceptable to construct trees for values that way, but it's much too low-level when it comes to expressing types.
Type functions would be part of the object level (i.e., strictly reasoning inside the type system), whereas macros are meta: they reason about/manipulate the type system.

The prime reason for turning to macros when implementing Typed Channels was error reporting; it would be awesome if you could weave that into your dream and make it work (i.e. not leading to a completely unmaintainable implementation while also not exposing the user to page-long and/or meaningless type errors). I guess what this boils down to is introducing non-termination explicitly into your type functions.

Regards,

Roland

I don't want to derail the (def) macro work with my dreaming of type functions. We're not ready for them yet.
More research is needed. I unfortunately can't afford to work on that right now.

More importantly, we have enough on our plates stabilizing the macro features we want to support. 

Let's get the value level right first before we turn to types.
We already have implicits and type inference to battle there.

I also am strongly convinced a lot of type-level hacking can be done with the features as they are planned. Necessity is the mother of invention.
I was also happy to hear positive feedback from Miles. 






Dr. Roland Kuhn
Akka Tech Lead
Typesafe – Empowering professional developers to build amazing apps.
twitter: @rolandkuhn

See you at Scala Days 2013 in NYC!
June 10th - June 12th
www.scaladays.org

Ben Hutchison

Jun 3, 2013, 3:36:09 AM
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin
On Sat, Jun 1, 2013 at 8:14 AM, Eugene Burmako <eugene....@epfl.ch> wrote:
> 4) Also let's not jump into conclusions solely based on ideology. It sounds
> nice to claim that macros are low-level, are too rough to touch such
> sophisticated beings as types, don't deal with the problems at the root,
> etc, but let's examine the situation concretely using examples.

Hear, hear!

This seems to be a debate between two alternatives, one of which is implemented (in early form), and one of which isn't. Unimplemented software invariably has fewer warts and more potential capabilities than implemented software. I worry that if type macros get sidelined, we'll be waiting ~3 years until the alternative solution is ready for applied use. "Perfect is the enemy of good", etc.

And I note it is the guy with the working implementation who is putting up concrete use cases and examples for analysis.

-Ben

John Sullivan

Jun 3, 2013, 8:40:37 AM
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin
This seems to be a debate between two alternatives, one of which is implemented (in early form), and one of which isn't. Unimplemented software invariably has fewer warts and more potential capabilities than implemented software. I worry that if type macros get sidelined, we'll be waiting ~3 years until the alternative solution is ready for applied use. "Perfect is the enemy of good", etc.

"Works for me" might be good enough when writing a script or a small program. Here we are dealing with a major extension of a complex compiler and putting out a public API. Public APIs need to be designed with care, because once you remove the "experimental" tag, you are stuck with what you've got. (Actually even before that, as can be seen by my own reaction earlier on this thread, where I showed out that I was already expecting an experimental feature that is only on a branch to make it to 2.11 or 2.12.) And a complex compiler should only be extended in maintainable and tractable ways. I would take into consideration that many of the people that are arguing for taking a step back and reconsidering how the API should work are the same people that maintain the compiler.

Adriaan Moors

Jun 3, 2013, 11:20:48 AM
to John Sullivan, scala-i...@googlegroups.com, Miles Sabin
Indeed. I'd like to emphasize that we're not arguing implementation but design.

I don't want a design that imperatively calls into the type system and exercises its dark corners.
We already have implicits and type inference for that (to a lesser extent, but still -- can you predict how a given set of implicits will work?
This problem will be an order of magnitude worse with type macros).

I want something that users can understand, with an elegant interface that allows us to evolve the implementation.

Scala is 10 years old now. Type macros may sound exciting to you right now, but how about in another 10 years when you have to debug huge applications that rely on all the warts of the compiler that we haven't been able to evolve because people were relying on all its idiosyncrasies?

The same considerations apply for def macros, by the way, which is why we're keeping them experimental until we can figure out a better (more restricted) interface to the compiler.

Adriaan Moors

Jun 3, 2013, 11:22:41 AM
to scala-i...@googlegroups.com
On Sun, Jun 2, 2013 at 10:31 PM, Roland Kuhn <goo...@rkuhn.info> wrote:
The prime reason for turning to macros when implementing Typed Channels was error reporting,
I'd love to understand the approach and how it could be implemented using (whitebox) def macros.
 
I guess what this boils down to is introducing non-termination explicitly into your type functions.
I don't see why that is. A type function must terminate (because it must be decidable), but it can do so with an error.

Eugene Burmako

Jun 3, 2013, 11:35:33 AM
to <scala-internals@googlegroups.com>, John Sullivan, Miles Sabin, Adriaan Moors
As I mentioned before, type macros don't have to "imperatively call into the type system and exercise its dark corners" and "rely on all the warts of the compiler". Indeed, currently they do, but this isn't set in stone, as I elaborated a couple of times in this thread. I suggest we directly discuss the examples I provided in the very previous message (which by the way are about the design) instead of making more judgements :(


--

Roland Kuhn

Jun 3, 2013, 11:49:54 AM
to scala-i...@googlegroups.com
Hi Adriaan,

On 3 Jun 2013 at 17:22, Adriaan Moors wrote:

On Sun, Jun 2, 2013 at 10:31 PM, Roland Kuhn <goo...@rkuhn.info> wrote:
The prime reason for turning to macros when implementing Typed Channels was error reporting,
I'd love to understand the approach and how it could be implemented using (whitebox) def macros.

I’m not sure what you are asking here (and I’m also not sure what you mean by whitebox in this context); the code is in the akka-channels package and it is quite obvious: it takes type parameters, does some straightforward calculations—which shows the beauty of operating on these types using normal Scala code instead of wrapping myself around type projections and implicits—and then it returns a tree whose main change is that the result type is a lot more precise than what was known before macro evaluation. Along the way several very different things can go wrong, and my point was that failure in a sub-expression of that type calculation must be able to generate a visible error message instead of just bailing out at the top level. My counter-example is @implicitNotFound, which gives absolutely no indication of what was causing the error in any but the most trivial circumstances.

 
I guess what this boils down to is introducing non-termination explicitly into your type functions.
I don't see why that is. A type function must terminate (because it must be decidable), but it can do so with an error.

As long as that is true for sub-expressions all is good. Either by way of an “exception” mechanism or by including error types implicitly as members of all valid types. A type function calling another type function must then be able to act upon returned failures (i.e. it gets to decide whether to try another thing or bail out). My experience with decidable type systems is non-existent, so I don’t know whether these constraints are contradictory or not.

Regards,

Roland






Johannes Rudolph

Jun 3, 2013, 11:56:20 AM
to scala-i...@googlegroups.com
On Sat, Jun 1, 2013 at 12:14 AM, Eugene Burmako <eugene....@epfl.ch> wrote:
3a) Speaking of being low-level. Would you name this type macro low-level? Why?

class Set[T] {
  def map[U](f: T => U): Result[U]
  type Result[NewEl] = NewEl match {
    case tq"Boolean" => tq"BitSet"
    case _ => tq"Set[$NewEl]"
  }
}


Because this just assembles valid types at the syntactic level (you need to type-check the type application to make sure the types (kinds?) make sense). As I understand the basic idea (and that's all I understand), type functions would operate directly on types and could maybe, in some form, prevent invalid types in the first place, i.e. when compiling the type function (my guess: by encoding some constraints of the target type system (= Scala's) within the type system of the language the type function is written in (also Scala? or some subset), or by introducing a new (sub-)language for this purpose).

I don't know which constraints would qualify for this kind of checking, but the obvious one (also applicable to your example) would be to check whether the kinds match.

3b) I've been looking through the Omega manual trying to find out another good example to discuss, and then I stumbled upon this type function:

plus :: Nat ~> Nat ~> Nat
{plus Z y} = y
{plus (S x) y} = S{plus x y}

An interesting thing here is that we need extended kinds to support this kind of signature. If we do a proposed DSL for type-level programming, we would need to come with a special syntax for that, and then we'd thread support for such kinds through the type checker. With macros, we can reuse a pre-existing language mechanism:

macro type plus[M: Nat, N: Nat] = tq"$M" match {
  case tq"Z" => tq"N"
  case tq"S[$M]" => tq"S[plus[$M, $N]]"
}

But will that be enough? In the paper there's a lot more about how to actually type-check a program using those types. It looks to be more like a system of type equations that has to be solved using some strategies than a one-way rewriting of types (or type trees).

--
Johannes

-----------------------------------------------
Johannes Rudolph
http://virtual-void.net

Adriaan Moors

Jun 3, 2013, 1:31:49 PM
to Eugene Burmako, <scala-internals@googlegroups.com>, John Sullivan, Miles Sabin
I agree they don't *have to*; I'm saying the problem is that they *can* manipulate the compiler directly, and they shouldn't be able to if we're going to evolve the compiler.
Our experience with adding features is that people *will* exploit all possible loopholes (unintentionally, of course) if they *can*, so it's our job to come up with an interface that can't be exploited.

Adriaan Moors

Jun 3, 2013, 1:51:37 PM
to scala-i...@googlegroups.com
Hi Roland,

The reason I'm asking for more detail is because def macros can also change the result type of the expression that they generate.
You only need type macros if you need to change the meaning of types that are unrelated to expressions (e.g., to change the parents of a class, to compute a self-type or the bound of a type parameter).

In many cases, type macros can be replaced by white box macros that synthesize type class instances. I've included an example below.

As soon as expressions are involved (e.g., a method call), you can use a whitebox def macro to influence its result type, give error messages,...

So, if you can express your design in terms of "magic" method calls that reason about, influence, and potentially "complain" about their types, you don't need type macros.

Whitebox macros refer to the kind of def macros that are see-through (like a white box): they can change their signature. Blackbox macros can't.
Both kinds of macros are implemented, though we will promote blackbox over whitebox.

cheers
adriaan

// a global whitebox macro that creates type class instances for the CanBuildFrom type class
implicit def universalCanBuildFrom[From: TypeTag, El: TypeTag, To: TypeTag]: CanBuildFrom[From, El, To] = macro magicImpl

// sketchy pseudo-code
// the core idea is that the result type of this macro is based on the type parameters From and El,
// and To will be inferred based on the expression this macro expands to
def magicImpl ... = (implicitly[TypeTag[From]].typeSymbol, implicitly[TypeTag[El]].typeSymbol) match {
  case (BitSetClass, IntClass) => new BitSetCanBuildFrom // : CanBuildFrom[BitSet, Int, BitSet] --> To will be inferred as BitSet
  case (BitSetClass, _) => new SetCBF[El] // : CanBuildFrom[BitSet, T, Set[T]] --> To will be inferred as Set[T]
  ...
}

Eugene Burmako

Jun 3, 2013, 1:53:52 PM
to Adriaan Moors, <scala-internals@googlegroups.com>, John Sullivan, Miles Sabin, Paul Phillips, Johannes Rudolph
@Adriaan. I meant that we can restrict some (or all) macros to not have a possibility to manipulate the compiler directly. Note that there's no Context in my example above.

As we discussed at one of the previous meetings, macros expressed in a subset of Scala can be made interpretable. The same might go for type macros. For instance, if they adhere to some special rules, they can be used for type computations. If not, we can either prohibit them or have our own flavor of -XUndecidableInstances.

@Johannes. My bad, I forgot that the tq"..." notation is currently used for quasiquotes that produce trees parsed in type context, so the code in that example didn't represent my original intent. Let's replace all the tq"..." thingies with t"..." interpolators that construct and deconstruct types. Quasiquotes don't have to be implemented by dumb parsing of the quoted code; they can do much more (including hygiene, creation of types, and so on).

The remark about type constraints is a very good one. Currently we're on a quest to find the best way for macros to interact with type inference. One possible approach here is to allow macros to append constraints to the type inferencer's equations, as they do in Nemerle.

One-way rewriting of types!!! That's it, that's exactly what I dislike in the current design of type macros. This drastically reduces the usefulness of type macros, since it becomes impossible to write "def map[U](f: T => U): Result[U]", as e.g. in example #3 (because Result[U] would want to expand immediately, which won't give us anything, since U here is just a type parameter). I didn't want to get into details before, since we were still struggling with the basics, but now that you mention it, I can't agree more.

As you can see, I'm not defending textual abstraction as the way to go for type-level computations. I'm rather proposing that we approach this problem from a metaprogramming standpoint: not "how do we hardcode a new language feature into the compiler?", but rather "how do we expose the necessary tools to the programmer (in a controllable manner!) to achieve the same effect with macros?". It is quite debatable whether this approach is universally applicable to programming language design, but in this particular case, given our positive experience with simplifying type-level things with macros, I think it is worth a try.

Adriaan Moors

Jun 3, 2013, 2:15:51 PM
to Eugene Burmako, <scala-internals@googlegroups.com>, John Sullivan, Miles Sabin, Paul Phillips, Johannes Rudolph
Regarding defending meta-programming over just programming.

Whenever there's a direct and an indirect way to express something, which way do you go?

// sketch of a direct way to replace CanBuildFrom with type functions
class Collection[El] {
  type ResultCollection[NewEl] = Collection[NewEl]
}

class BitSet extends Collection[Int] {
  override type ResultCollection[NewEl] = NewEl match {
    case Int => BitSet
    case T => Set[T]
  }
}

This is a basic example of the kind of computation I'm talking about. Note that I don't know how to spec this fully, nor do I think this is a priority right now.

I do think we're more likely to be able to spec and maintain something like this (since PL design is our bread and butter),
than to figure out a way to enforce the same kind of consistency when the field is wide open.
When your program is constructed by a sequence of method calls on an API, the problem space sprouts a lot more variables.

You'll need something like SpecSharp to express complicated contracts to get back to the same level of safety as the direct approach of specifying a programming language, and you still won't have reached the same level of abstraction.

More concretely, the type checker will interpret this direct (type-level) program according to the spec.
The API is very limited: there's syntax and semantics. Spec'ed and enforced by the implementation. We're used to this kind of thing.
When you expose the compiler API to do the same thing, many more low-level concerns suddenly become relevant.

For example, in which order does the macro call methods on the context? What of multiple threads?
We don't have a way to statically check the sequence of method calls on an API,
but we do know how to do this for a programming language with parsing and context-sensitive algorithms akin to type checking.
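For concreteness, here is the indirect route being contrasted, sketched as the familiar implicit "type function" pattern (CanBuildFrom-style). The names are hypothetical and this is only a sketch of today's encoding, not a proposal:

import scala.collection.immutable.BitSet

// a type-level function from element types to collection types,
// evaluated by implicit search rather than by a spec'ed construct
trait ResultCollection[NewEl] { type Out }

trait LowPriorityResultCollection {
  // default case: any element type maps to Set[T]
  implicit def default[T]: ResultCollection[T] { type Out = Set[T] } =
    new ResultCollection[T] { type Out = Set[T] }
}

object ResultCollection extends LowPriorityResultCollection {
  // special case: Int maps to BitSet
  implicit val forInt: ResultCollection[Int] { type Out = BitSet } =
    new ResultCollection[Int] { type Out = BitSet }
}

// the computed type surfaces through Out:
//   def check[T, R](implicit rc: ResultCollection[T] { type Out = R }): Unit = ()
//   check[Int, BitSet]; check[String, Set[String]]   // both compile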

Adriaan Moors

unread,
Jun 3, 2013, 2:19:40 PM6/3/13
to Eugene Burmako, <scala-internals@googlegroups.com>, John Sullivan, Miles Sabin, Paul Phillips, Johannes Rudolph
One more time, more succinctly, and then I'm going to have to spend some time on other stuff.

Going from a normal program at the value level to a macro is one way of "going meta": you're programming reflectively on the value-level. You've shifted a stage in time.
Going from the value level to the type level is another dimension of meta, but it has "real programming" rather than "reflective programming". You're still in the same stage as the base-level: both happen at compile time.

Eugene Burmako

unread,
Jun 3, 2013, 2:32:56 PM6/3/13
to Adriaan Moors, <scala-internals@googlegroups.com>, John Sullivan, Miles Sabin, Paul Phillips, Johannes Rudolph
Thank you!

While macros can be simplified to reach almost the same level of conciseness as in your example, they will still have to use quasiquoting of sorts to wrap things like "Int" or "BitSet". That's indeed a direct consequence of using indirection here. Nevertheless, it's not a one-way street - indirection also has advantages that I've outlined elsewhere, which is why I'm hopeful about metaprogramming in general.

As for the macro API and the guarantees we want to provide, I agree that your concerns are very valid. I don't think I have anything to add beyond summarizing what has been said previously. It doesn't have to be as bad as what we have right now (it may even turn out to be very good). The details have to be worked out in the future.

Roland Kuhn

unread,
Jun 3, 2013, 3:56:50 PM6/3/13
to scala-i...@googlegroups.com
Hi Adriaan,

On 3 Jun 2013, at 19:51, Adriaan Moors wrote:

The reason I'm asking for more detail is that def macros can also change the result type of the expression they generate.
You only need type macros if you need to change the meaning of types that are unrelated to expressions (e.g., to change the parents of a class, to compute a self-type or the bound of a type parameter).

In many cases, type macros can be replaced by whitebox macros that synthesize type class instances. I've included an example below.

As soon as expressions are involved (e.g., a method call), you can use a whitebox def macro to influence its result type, give error messages,...

So, if you can express your design in terms of "magic" method calls that reason about, influence, and potentially "complain" about their types, you don't need type macros.

Yes, I am aware of all this, since I am using those white-box def macros already. The reason I chimed in is that I’d rather replace those white-box macros—which make life a lot harder for IDEs and probably don’t work in IntelliJ anyway—with the type calculations you talked about; I’m with Miles on this one. But in order to do that we would need a solution for the error reporting problem.

Whitebox macros refer to the kind of def macros that are see-through (like a white box): they can change their signature. Blackbox macros can't.
Both kinds of macros are implemented, though we will promote blackbox over whitebox.

Did I understand correctly that white-box macros will be part of 2.11 for sure?

Regards,

Roland

Adriaan Moors

unread,
Jun 3, 2013, 4:45:00 PM6/3/13
to scala-i...@googlegroups.com
 
So, if you can express your design in terms of "magic" method calls that reason about, influence, and potentially "complain" about their types, you don't need type macros.

Yes, I am aware of all this, since I am using those white-box def macros already. The reason I chimed in is that I’d rather replace those white-box macros—which make life a lot harder for IDEs and probably don’t work in IntelliJ anyway—with the type calculations you talked about; I’m with Miles on this one. But in order to do that we would need a solution for the error reporting problem.
Great! I'd implement types that signal errors just like on the value level:

def error(message: String) = <calls into compiler api>
becomes
type Error[Message <: String with Singleton] = Nothing

error("Not supported")
becomes
Error["Not supported".type]

When type-level computation ends with this type, the compiler can signal an error accordingly.
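For reference, the value-level half of this is expressible today as a blackbox def macro. A minimal sketch against the 2.10-era macro API (the object and method names here are made up, and the macro must live in a separate compilation run from its call sites, as usual):

import scala.language.experimental.macros
import scala.reflect.macros.Context

object CompileErrors {
  // aborts compilation at the call site with the given message
  def error(message: String): Nothing = macro impl

  def impl(c: Context)(message: c.Expr[String]): c.Expr[Nothing] = {
    import c.universe._
    message.tree match {
      case Literal(Constant(msg: String)) => c.abort(c.enclosingPosition, msg)
      case _ => c.abort(c.enclosingPosition, "error(...) expects a string literal")
    }
  }
}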
 
 
Whitebox macros refer to the kind of def macros that are see-through (like a white box): they can change their signature. Blackbox macros can't.
Both kinds of macros are implemented, though we will promote blackbox over whitebox.

Did I understand correctly that white-box macros will be part of 2.11 for sure?
They'll be among the experimental features of 2.11. Please see my first message in this thread for details.


Eugene Burmako

unread,
Jun 3, 2013, 4:49:11 PM6/3/13
to <scala-internals@googlegroups.com>
@Roland. Could you elaborate on IDE problems caused by your macros?

@Adriaan. What if some extra payload that identifies an error needs to be carried along? E.g. in Roland's case of ping-pong analysis, that would include not only the name of the unsupported message, but also the sequence of messages that led to it.

Roland Kuhn

unread,
Jun 3, 2013, 4:55:57 PM6/3/13
to scala-i...@googlegroups.com
On 3 Jun 2013, at 22:49, Eugene Burmako wrote:

@Roland. Could you elaborate on IDE problems caused by your macros?

They work fine in Eclipse, but AFAIK IntelliJ does not use the presentation compiler, so it won’t expand the macros, which means that the program will not type-check (because the return type is e.g. Future[_] instead of the “real” type).


@Adriaan. What if some extra payload that identifies an error needs to be carried over? E.g. in Roland's case of ping-pong analysis that would be not only the name of the unsupported message, but also a sequence of messages that led to that message.

I guess that would work by using recursion (which my use case would need anyway, precisely for the ping-pong analysis) and using type pattern matching to propagate the error back out.

Erik Osheim

unread,
May 31, 2013, 4:28:01 PM5/31/13
to scala-i...@googlegroups.com
On Fri, May 31, 2013 at 03:26:48PM -0400, Rex Kerr wrote:
> I'm not sure it's documented anywhere. I know that Erik and Tom have got
> piles of plus(times(a,5),b) stuff in Spire specifically because they felt
> it was too difficult to break apart the use sites and the macro creation
> sites. (That sort of stuff is very hard for me to parse, which really
> limits my ability/desire to contribute to Spire.)

The thing that I have struggled with (possibly just due to not being
smart enough about macros) is the following situation:

1. the trait Foo would like to use the macro bar
2. the macro bar needs to know about the Foo type for its implementation

In this case you can't easily split the macros from the traits.

Fortunately, I think in Spire we've finally managed to get around this
(by writing generic Ops macros that don't care about the type class
trait used). I'm hoping to get some commits pushed today or tomorrow
that should make things like Complex.scala a lot more readable!
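The generic Ops idea, roughly, for anyone following along (a simplified sketch of the technique, not Spire's actual code: here the type class method name is hardcoded to "plus", whereas Spire derives it from the macro application):

import scala.language.experimental.macros
import scala.reflect.macros.Context

trait AdditiveSemigroup[A] { def plus(x: A, y: A): A }

object AdditiveSyntax {
  // the enrichment captures lhs and the evidence; the macro below never
  // names AdditiveSemigroup, so the same macro works for any type class
  implicit class AdditiveOps[A](lhs: A)(implicit ev: AdditiveSemigroup[A]) {
    def +(rhs: A): A = macro GenericOps.binop[A]
  }
}

object GenericOps {
  def binop[A](c: Context)(rhs: c.Expr[A]): c.Expr[A] = {
    import c.universe._
    // assumes c.prefix.tree has the shape AdditiveOps(lhs)(ev)
    val Apply(Apply(_, List(lhs)), List(ev)) = c.prefix.tree
    // rewrite lhs + rhs to ev.plus(lhs, rhs): no wrapper survives expansion
    c.Expr[A](Apply(Select(ev, newTermName("plus")), List(lhs, rhs.tree)))
  }
}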

That said, I think it's easy to find yourself in the circular macro
box.

-- Erik

Eugene Burmako

unread,
Jun 5, 2013, 6:56:25 AM6/5/13
to scala-internals
Why is single-parametric materialization experimental? It can be
perfectly well expressed with blackbox macros.


Eugene Burmako

unread,
Jun 6, 2013, 4:56:03 AM6/6/13
to scala-internals
Also, how come fundep materialization is in paradise? Last time we
discussed it, it was looking good for trunk. I understand that I need
to address the comments before merging it, so it's like annotations -
maybe we'll make it in time and maybe not - but why shelve it in
paradise? Is there something that I don't know?

Adriaan Moors

unread,
Jun 6, 2013, 6:05:29 PM6/6/13
to scala-i...@googlegroups.com
Yep, fundep materialization can move up the ladder as far as I'm concerned.



Paolo G. Giarrusso

unread,
Jun 7, 2013, 12:14:05 AM6/7/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin


On Saturday, June 1, 2013 12:14:55 AM UTC+2, Eugene Burmako wrote:
1) I rather meant to ask how one would write complex type functions. What abstraction mechanisms could one use, e.g., to remove duplication from two similar type functions?

2) Being meta- or macro- has its benefits. We can use familiar Scala code (probably restricted in a way) and use familiar abstractions (the reflection API) to solve a whole range of problems, whereas tools given to a DSL programmer will most likely be inapplicable in other domains.

3a) Speaking of being low-level. Would you name this type macro low-level? Why?

class Set[T] {
  def map[U](f: T => U): Result[U]
  type Result[NewEl] = NewEl match {
    case tq"Boolean" => tq"BitSet"
    case _ => tq"Set[$NewEl]"
  }
}

3b) I've been looking through the Omega manual trying to find another good example to discuss, and I stumbled upon this type function:

plus :: Nat ~> Nat ~> Nat
{plus Z y} = y
{plus (S x) y} = S{plus x y}

An interesting thing here is that we need extended kinds to support this kind of signature. If we did a proposed DSL for type-level programming, we would need to come up with special syntax for that, and then thread support for such kinds through the type checker. With macros, we can reuse a pre-existing language mechanism:

macro type plus[M: Nat, N: Nat] = tq"$M" match {
  case tq"Z" => tq"N"
  case tq"S[$M]" => tq"S[plus[$M, $N]]"
}
 
4) Also, let's not jump to conclusions based solely on ideology. It sounds nice to claim that macros are low-level, are too rough to touch such sophisticated beings as types, don't deal with the problems at the root, etc, but let's examine the situation concretely, using examples. If macros are indeed ill-suited for the tasks at hand, you know me, I'll be the most fervent zealot (though only after first trying to make macros better), but let's not get ahead of ourselves...

Eugene, I don't think "conclusions based on ideology" is the most appropriate description.
While type-level functions are not directly supported in Scala, beyond System F omega itself there are a dozen languages that have them (dependently-typed languages based on Curry-Howard), supported by decades of research, so there's vastly more experience in the field; I happen to be familiar with Agda and have written a bit of Coq. I know zero languages combining them with subtyping, but somebody (Martin?) mentioned a technical report on the issue.

Moreover, type-level functions can already be encoded. Sum is encoded at this page: http://apocalisp.wordpress.com/2010/06/21/type-level-programming-in-scala-part-4d-peano-arithmetic/, though you'll need the previous pages to understand that definition.

To show some problems where type macros seem unsuitable, I'll show that often the typechecker needs not only to run type functions, but also to do equational reasoning with them.

The examples in this thread are simple type transformers; they surely have their applications, but they are too simple for my point. I'll take an example that actually uses type-level addition, show a reasonable way of supporting it (the approach used in Agda, as far as I understand), and this might give you an idea of the requirements that a type-level sum must satisfy.

First, here's a simple encoding of type-level numerals in Scala, translated from some variant of C#:

trait Nat
class Zero extends Nat
class Succ[T <: Nat] extends Nat

The standard application, in the literature, is to use those to build sized lists — I'd bet that's also doable in Omega, since the stated goal is to support similar examples. In the following, L is the list length.

trait List[+A, L] {
  def head[K <: Nat](implicit p: L =::= Succ[K]): A
  def tail[K <: Nat](implicit p: L =::= Succ[K]): List[A, K]

  def map[B](f: A => B): List[B, L]
  def zip[B](that: List[B, L]): List[(A, B), L]
}

The rest is at https://gist.github.com/Blaisorblade/5726830#file-typesasvalues-scala-L44.
Suppose you also have type-level sum, defined as the equivalent of the standard definition on Peano numerals:

Sum Z n = n
Sum (S m) n = S (Sum m n)

Now you can write the type for list concatenation:

trait List[+A, L] {
  ...
  def append[K <: Nat](l: List[A, K]): List[A, Sum[L, K]]
}
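In today's Scala, Sum itself has to be encoded too, e.g. as an implicit type function over the Nat encoding above, with append threading the evidence. A sketch in the shapeless Aux style (the names are hypothetical):

trait Sum[M <: Nat, N <: Nat] { type Out <: Nat }

object Sum {
  type Aux[M <: Nat, N <: Nat, R <: Nat] = Sum[M, N] { type Out = R }

  // Sum Z n = n
  implicit def zeroCase[N <: Nat]: Aux[Zero, N, N] =
    new Sum[Zero, N] { type Out = N }

  // Sum (S m) n = S (Sum m n)
  implicit def succCase[M <: Nat, N <: Nat](implicit s: Sum[M, N]): Aux[Succ[M], N, Succ[s.Out]] =
    new Sum[Succ[M], N] { type Out = Succ[s.Out] }
}

// append then becomes:
//   def append[K <: Nat, R <: Nat](l: List[A, K])(implicit s: Sum.Aux[L, K, R]): List[A, R]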

Now, consider typechecking `Nil zip append(v1 :: l1, l2)`, where l1 has length M and l2 length N, both unknown.
To typecheck the call to zip, the typechecker then needs to solve Z = Sum (S M) N, and it must prove that unification is impossible (either directly or with programmer help). Even though M and N are unknown, normalization can apply the definition Sum (S m) n = S (Sum m n) as a rule for equational reasoning (or as a reduction rule, left to right) and produce Z = S (Sum M N), which is clearly impossible because Z and S are different constructors.

Think of doing this with type macros: either the language of macros is the same as what I described (but then, why should they be called macros and not functions?), or the macro writer needs to give a sensible result (execute part of the addition) even on incomplete types.

Another example, which is not directly about type-level functions but arises in relation to GADTs, is the type refinement needed to program in this style.

BTW, note that in Scala Nat is not a kind but a type, so above we write K <: Nat, not K: Nat. Unlike Omega, we don't need extensible kinds, types themselves and subtyping are enough. Adriaan, that made me wonder a while ago: can DeLesley Hutchins's Pure Subtype Systems be embedded in Scala, at the value/type level?

Best,
Paolo

Rex Kerr

unread,
Jun 7, 2013, 1:20:26 AM6/7/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin
On Fri, Jun 7, 2013 at 12:14 AM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:

To show some problems where type macros seem unsuitable, I'll show that often the typechecker needs not only to run type functions, but also to do equational reasoning with them.

The examples in this thread are simple type transformers, even though they surely have their applications, but are too simple for my example. I'll take an example which actually uses type-level addition, show a reasonable way of supporting it (the approach used in Agda as far as I understand), and this might give you an idea of the requirements which a type-level sum must satisfy.

However, take an encoding of type-level numerals in Scala. Here's a simple encoding, translated from some variant of C#:

trait Nat
class Zero extends Nat
class Succ[T <: Nat] extends Nat

The standard application, in the literature, is to use those to build sized lists — I'd bet that's also doable in Omega, since the stated goal is to support similar examples. In the following, L is the list length.

trait List[+A, L] {
  def head[K <: Nat](implicit p: L =::= Succ[K]): A
  def tail[K <: Nat](implicit p: L =::= Succ[K]): List[A, K]

  def map[B](f: A => B): List[B, L]
  def zip[B](that: List[B, L]): List[(A, B), L]
}

The rest is at https://gist.github.com/Blaisorblade/5726830#file-typesasvalues-scala-L44.
Suppose you also have type-level sum, defined as the equivalent of the standard definition on Peano numerals:

Sum Z n = n
Sum (S m) n = S (Sum m n)

Now you can write the type for list concatenation:

trait List[+A, L] {
  ...
  def append[K <: Nat](l: List[A, K]): List[A, Sum[L, K]]
}

Now, consider typechecking `Nil zip append(v1 :: l1, l2)`, where l1 has length M and l2 length N, both unknown.
To typecheck the call to zip, the typechecker then needs to solve Z = Sum (S M) N, and it must prove that unification is impossible (either directly or with programmer help). Even though M and N are unknown, normalization can apply the definition Sum (S m) n = S (Sum m n) as a rule for equational reasoning (or as a reduction rule, left to right) and produce Z = S (Sum M N), which is clearly impossible because Z and S are different constructors.

Why do you want to make your compiler understand natural numbers constructively? This is an example to me not of the defects of type macros but of the defects of having anything less than a Turing-complete language to do type-level reasoning--and while you're at it, why not use the same language that you're writing everything else in?

If you're writing your language to prove theorems, then it makes sense to have your type system prove things by induction etc. If you're writing your language to do anything else, though, this does not seem like the pragmatic way to go. Instead, inventing notation as I go,

  trait List[+A, L \ Int {L >= 0}] {
    def head[{ L > 0 }]: A
    def tail[{L > 0}]: List[A, {L-1}]
    def zip[B](that: List[B,L]): List[(A,B), L]
    def append[K \ Int {K >= 0}](l: List[A,K]): List[A, {L+K}]
  }

allows/forces the programmer to specify what to do with the types.  That is, types are a first-class language construct (which evaluate mostly at compile-time save for reflection/reification).  If second-class, they should be a close second.

  trait HList[Q \ Seq[Type]] {
    def gather[R {Q contains R}]: HList[Seq[R] { Q filter { _ == R } }]
  }

should (ideally) be not just doable but easy.  It's a perfectly sensible thing to want to do, and easy to express: given something parameterized by a sequence of types, pull out all those of a particular type.  It might be nontrivial to map type computations to data computations (e.g. here, since the _types_ know how to pull out the Rs, how do you pull out the _data_ corresponding to the Rs?), but I think it's worth a great deal of thought if one is going to try to go meaningfully beyond type macros.

I'm a little wary of aiming for less than this because it seems almost-clear to me that this ought to be the eventual goal, and having type macros, equational reasoning with limited syntax, and first class types all at the same time is a recipe for...well...I'm not sure it's even a recipe.  If you need reasoning, provide a general reasoning-library and let people apply it to type-values (or whatever else they might want to reason with).

  --Rex

Paolo G. Giarrusso

unread,
Jun 7, 2013, 3:54:17 PM6/7/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin
While this sounds interesting, one could just handle pairs of types and values.


I'm a little wary of aiming for less than this because it seems almost-clear to me that this ought to be the eventual goal, and having type macros, equational reasoning with limited syntax, and first class types all at the same time is a recipe for...well...I'm not sure it's even a recipe.  If you need reasoning, provide a general reasoning-library and let people apply it to type-values (or whatever else they might want to reason with).

There's no "equational reasoning with limited syntax", neither does the "type system prove things by induction".
Probably my explanation confused you, and in multiple ways. To understand such systems, you should maybe try programming in one, or read a good paper about one (maybe "Why dependent types matter" is a good choice). What I will do is write in Agda a version of your `gather` in a few lines, using pattern matching.

Instead of:


> Even though M and N are unknown, normalization can apply the definition Sum (S m) n = S (Sum m n) as a rule for equational reasoning (or as a reduction rule, left to right) and produce Z = S (Sum M N), which is clearly impossible because Z and S are different constructors.

I should have written that "Sum (S m) n -> S (Sum m n)" works as a reduction rule. In other words, Sum just does pattern matching as it would do in Haskell; the computational model is still lambda-calculus, like in every (typical) functional programming language. The difference, however, is that the type-checker needs to normalize open terms - that is, during typechecking expressions are evaluated not using call-by-value (or even call-by-need), but using full beta reduction.

In your example, you seem to try using standard numbers at the type level and have them work as first-class entities. How should the typechecker compare Sum (S m) n and S (Sum m n) when they are expressed as (m + 1) + n and (m + n) + 1? Answer: if you use builtin addition and don't add equational reasoning tools to your typechecker (somehow), you can't. `Nil zip append(v1 :: l1, l2)` wouldn't typecheck. That's why I defined naturals constructively, because that makes things **easier** from this point of view — some proofs will just be done by computing values at compile-time and comparing them, which is what you want.

However, in practice you want to automate reasoning over naturals. The most common strategy is to not have first-class types, and use a specialized reasoning tool (like an SMT solver) for typechecking programs like your list there. I think that's what they use to typecheck JavaScript using dependent types (Google "dependent types for javascript"). But there, as far as I understand, types are very much non-first-class.

It's also possible to combine some automated reasoning with first-class types.
But then, to typecheck programs using lists of types, the typechecker would need to do similar reasonings as the ones I've shown.

Paul Phillips

unread,
Jun 7, 2013, 4:05:13 PM6/7/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin

On Thu, Jun 6, 2013 at 10:20 PM, Rex Kerr <ich...@gmail.com> wrote:
Why do you want to make your compiler understand natural numbers constructively? This is an example to me not of the defects of type macros but of the defects of having anything less than a Turing-complete language to do type-level reasoning--and while you're at it, why not use the same language that you're writing everything else in?

In statically typed languages, types are intended to provide behavioral guarantees not based on arbitrary computation but based on (at least in principle) a restricted set of foundational elements which have been proven sound. As you describe it, we could eject the entire type system. Model compilation as a two-stage process: first compile the type system, then compile the rest of the code under the rules of the compiled type system. I like this idea as an avenue of research (hopefully in a more performant way than I just described, and with a few rules which can't be compiled away) - maybe this already exists even - but this is not what Scala is.

Note that in practice most languages, definitely including scala, fail the "in principle" given above. I cite it because that is what I'm after, personally. Arbitrary type computation would represent the utter abandonment of that goal.

Paul Phillips

unread,
Jun 7, 2013, 4:16:39 PM6/7/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin
On Thu, Jun 6, 2013 at 10:20 PM, Rex Kerr <ich...@gmail.com> wrote:
Why do you want to make your compiler understand natural numbers constructively?

On the off-chance someone tries to connect the text I quoted and my response to it, I add that I DON'T think we should have to make our compiler understand natural numbers constructively. We can easily teach the compiler about the properties of natural numbers in such a way that they are among the foundational elements to which I referred. I have long wished to do this. However, one's desire not to be stuck defining basic arithmetic for the benefit of a computer (oh the irony) does not imply one wishes to be subject to all the same classes of bugs within the type system as one faces down in the trenches.

Paolo G. Giarrusso

unread,
Jun 7, 2013, 4:18:22 PM6/7/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin

I'm not sure which goal you're after, but I'll assume you like a sound system.
 
Dependently-typed languages achieve the goal of "arbitrary" type computation but maintain soundness. I think their type system might even be simpler than full Scala, and much better understood, at least in its core and at least for a few important languages.

"Arbitrary" means the language available is the same as for values, and is almost Turing-complete; you can write any function which you can prove total, and a lot of recursion patterns are supported. And of course you don't have side effects in type computation - in Agda, side effects are controlled through monads.

Omega simply separates the value and the type level, so that at the value level you can still have unrestricted recursion.

Paul Phillips

unread,
Jun 7, 2013, 4:20:28 PM6/7/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin
On Fri, Jun 7, 2013 at 1:18 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
"Arbitrary" means the language available is the same as for values, and is almost Turing-complete; you can write any function which you can prove total, and a lot of recursion patterns are supported. And of course you don't have side effects in type computation - in Agda, side effects are controlled through monads.

Okay, but this is not how we usually define "arbitrary". 


Paolo G. Giarrusso

unread,
Jun 7, 2013, 4:28:58 PM6/7/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin


On Friday, June 7, 2013 10:16:39 PM UTC+2, Paul Phillips wrote:

On Thu, Jun 6, 2013 at 10:20 PM, Rex Kerr <ich...@gmail.com> wrote:
Why do you want to make your compiler understand natural numbers constructively?

On the off-chance someone tries to connect the text I quoted and my response to it, I add that I DON'T think we should have to make our compiler understand natural numbers constructively. We can easily teach the compiler about the properties of natural numbers in such a way that they are among the foundational elements to which I referred. I have long wished to do this.
 
If you care, this has been done in Dependent ML and ATS, and it's rather useful. The only annoyance is that this is not extensible, so it's cooler if those principles can be added in a library where the typechecker forces you to produce correct proofs, as in Agda and Coq. But then you need your typechecker to be usable as a theorem prover, and that's not something I suggest adding to Scala. (Ignoring the fact that such a typechecker would need to be more robust than Scala's.)

On Friday, June 7, 2013 10:20:28 PM UTC+2, Paul Phillips wrote:

On Fri, Jun 7, 2013 at 1:18 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:
"Arbitrary" means the language available is the same as for values, and is almost Turing-complete; you can write any function which you can prove total, and a lot of recursion patterns are supported. And of course you don't have side effects in type computation - in Agda, side effects are controlled through monads.

Okay, but this is not how we usually define "arbitrary". 
 
Right. What I meant is that for me that's a reasonably powerful kind of type computation; you can even allow possibly non-terminating functions at the type level if you really want. But allowing, say, I/O, would be confusing.

Paolo G. Giarrusso

unread,
Jun 7, 2013, 4:51:27 PM6/7/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin


My solution in Agda is available at https://gist.github.com/Blaisorblade/5732172. The implementation of gather is just 4 lines:

filterTypes : ∀ {Γ Γ′} → MyList Γ → Γ contains Γ′ → MyList Γ′
filterTypes nil empty = nil
filterTypes (cons v vs) (dropFirst contain) = filterTypes vs contain
filterTypes (cons v vs) (keepFirst contain) = cons v (filterTypes vs contain)

The rest is just the definition of "contains" and of a typed list, and a lot of comments.

Note that the computation is driven by the proof of the containment relation; on the other hand, the type is so precise that there is, I guesstimate, a single implementation with this type, which is the correct one. Moreover, I was so lucky that this code could be auto-generated by type-driven auto-completion, once I wrote the type.

However, not all is well and good: I did not reuse standard lists because they do not have the right type, so I cannot reuse operations on standard lists. Apparently, addressing this problem is a recent research topic.
Moreover, sometimes it's less easy to convince the typechecker that your function is type-correct. But unlike in Scala, if you can do the needed proofs, you can convince it — its behavior is much more predictable.
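For comparison, Rex's gather can also be written in today's Scala, with the type-level computation driven by implicit search. A sketch in the shapeless style (all names hypothetical):

sealed trait HList
final case class ::[+H, +T <: HList](head: H, tail: T) extends HList
sealed trait HNil extends HList
case object HNil extends HNil

// Gather[L, A] computes (and implements) the sublist of L with element type A
trait Gather[L <: HList, A] {
  type Out <: HList
  def apply(l: L): Out
}

trait LowPriorityGather {
  // the head is not an A: skip it
  implicit def skipCase[H, T <: HList, A](implicit g: Gather[T, A]): Gather.Aux[H :: T, A, g.Out] =
    new Gather[H :: T, A] { type Out = g.Out; def apply(l: H :: T) = g(l.tail) }
}

object Gather extends LowPriorityGather {
  type Aux[L <: HList, A, O <: HList] = Gather[L, A] { type Out = O }

  implicit def nilCase[A]: Aux[HNil, A, HNil] =
    new Gather[HNil, A] { type Out = HNil; def apply(l: HNil) = HNil }

  // the head is an A: keep it (beats skipCase by implicit priority)
  implicit def keepCase[T <: HList, A](implicit g: Gather[T, A]): Aux[A :: T, A, A :: g.Out] =
    new Gather[A :: T, A] { type Out = A :: g.Out; def apply(l: A :: T) = ::(l.head, g(l.tail)) }
}

// val xs: Int :: String :: Int :: HNil = ::(1, ::("a", ::(2, HNil)))
// implicitly[Gather[Int :: String :: Int :: HNil, Int]].apply(xs)
// has type Int :: Int :: HNil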

Rex Kerr

unread,
Jun 8, 2013, 2:33:07 PM6/8/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin
On Fri, Jun 7, 2013 at 3:54 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:


I'm a little wary of aiming for less than this because it seems almost-clear to me that this ought to be the eventual goal, and having type macros, equational reasoning with limited syntax, and first class types all at the same time is a recipe for...well...I'm not sure it's even a recipe.  If you need reasoning, provide a general reasoning-library and let people apply it to type-values (or whatever else they might want to reason with).

There's no "equational reasoning with limited syntax", neither does the "type system prove things by induction".
Probably my explanation confused you, and in multiple ways. To understand such systems, you should maybe try programming in one, or reading some good paper about one (maybe "Why dependent types matter" is a good choice).

It's certainly true that my knowledge of languages with proof-assistant capability lags far behind my knowledge of either how to do normal (mathematical) proofs or more traditional languages.
 
What I will do is write in Agda a version of your `gather` in a few lines, using pattern matching.

Instead of:


> Even though M and N are unknown, normalization can apply the definition Sum (S m) n = S (Sum m n) as a rule for equational reasoning (or as a reduction rule, left to right) and produce Z = S (Sum M N), which is clearly impossible because Z and S are different constructors.

I should have written that "Sum (S m) n -> S (Sum m n)" works as a reduction rule. In other words, Sum just does pattern matching as it would do in Haskell; the computational model is still lambda-calculus, like in every (typical) functional programming language.

Doesn't this, unless your compiler knows how to prove things via induction, give the compiler O(n+m) work to do--with a very large constant term compared to a bulk memory copy--every time the user concats two fixed-size arrays?  That is, if n+m = a+b but n, m, a, and b are all distinct, your type checker ought to know that Sum n m is Sum a b, and if the only means of computing that is to expand to S(S(S(....S(Zero)...))), it's not exactly going to be fast.

I suppose you could argue that you don't need to worry about such things as much with lists, where the numeric S operation and the list cons are naturally related (not sure I want to say isomorphic).  But one can have big lists also.
 
The difference, however, is that the type-checker needs to normalize open terms - that is, during typechecking expressions are evaluated not using call-by-value (or even call-by-need), but using full beta reduction.

Yes, on reflection, for this to be general enough to be useful, and to be a guarantee rather than an annotation of what you expect, you'd need more symbolic reasoning than I'd anticipated.
 

In your example, you seem to try using standard numbers at the type level and have them work as first-class entities. How should the typechecker compare Sum (S m) n and S (Sum m n) when they are expressed as (m + 1) + n and (m + n) + 1? Answer: if you use builtin addition and don't add equational reasoning tools to your typechecker (somehow), you can't.

Indeed.  You could evaluate them for any particular expression with known n and m and conclude that they are the same, but you could not--without formula reduction algorithms--conclude that (m+1)+n is (m+n)+1.
 
 `Nil zip append(v1 :: l1, l2)` wouldn't typecheck. That's why I defined naturals constructively, because that makes things **easier** from this point of view — some proofs will just be done by computing values at compile-time and comparing them, which is what you want.

However, in practice you want to automate reasoning over naturals.  The most common strategy is to not have first-class types, and use a specialized reasoning tool (like an SMT solver) for typechecking programs like your list there. I think that's what they use to typecheck JavaScript using dependent types (Google "dependent types for javascript"). But there, as far as I understand, types are very much non-first-class.

It's also possible to combine some automated reasoning with first-class types.
But then, to typecheck programs using lists of types, the typechecker would need to do similar reasonings as the ones I've shown.

Agreed.

  --Rex
 

Paolo G. Giarrusso

unread,
Jun 8, 2013, 4:32:12 PM6/8/13
to scala-i...@googlegroups.com, Adriaan Moors, Miles Sabin
On Saturday, June 8, 2013 8:33:07 PM UTC+2, Rex Kerr wrote:

On Fri, Jun 7, 2013 at 3:54 PM, Paolo G. Giarrusso <p.gia...@gmail.com> wrote:


I'm a little wary of aiming for less than this because it seems almost-clear to me that this ought to be the eventual goal, and having type macros, equational reasoning with limited syntax, and first class types all at the same time is a recipe for...well...I'm not sure it's even a recipe.  If you need reasoning, provide a general reasoning-library and let people apply it to type-values (or whatever else they might want to reason with).

There's no "equational reasoning with limited syntax", neither does the "type system prove things by induction".
Probably my explanation confused you, and in multiple ways. To understand such systems, you should maybe try programming in one, or reading some good paper about one (maybe "Why dependent types matter" is a good choice).

It's certainly true that my knowledge of languages with proof-assistant capability lags far behind my knowledge of either how to do normal (mathematical) proofs or more traditional languages.

Not your fault: I wasn't just being polite when I said it was my fault, and the topic isn't trivial to explain. So kudos for following! You've already gotten quite a bit of intuition.

Moreover, these languages are much more research-y than something like Scala. They might be industrial-strength for theorem proving, but probably not yet so for writing actual programs in industry. In particular, typechecking speed makes Scalac look very fast. 

However, a well-known researcher wrote a certified C compiler, CompCert, using Coq (a more mature system, though slightly less convenient for programming). He's famous for doing that, though.

And Agda's support for type debugging is amazing: I miss it when programming in Haskell, and it would probably be useful in Scala too. In particular, you can start from a type signature and fill in details from the outside in while ensuring that what you type is type-wise plausible; if the types force a specific step, the system can do that step for you.

If however you give up on having *exactly* the same language for value and type-level, then this kind of technology is much closer to being actually usable *today*, and you can try it with Omega as far as I understand (and many ideas are even supported in Haskell with a few recent extensions).

What I will do is write in Agda a version of your `gather` in a few lines, using pattern matching.

Instead of:


> Even though M and N are unknown, normalization can apply the definition Sum (S m) n = S (Sum m n) as a rule for equational reasoning (or as a reduction rule, left to right) and produce Z = S (Sum M N), which is clearly impossible because Z and S are different constructors.

I should have written that "Sum (S m) n -> S (Sum m n)" works as a reduction rule. In other words, Sum just does pattern matching as it would do in Haskell; the computational model is still lambda-calculus, like in every (typical) functional programming language.

Doesn't this, unless your compiler knows how to prove things via induction, give the compiler O(n+m) work to do--with a very large constant term compared to a bulk memory copy--every time the user concats two fixed-size arrays?

That is, if n+m = a+b but n, m, a, and b are all distinct, your type checker ought to know that Sum n m is Sum a b, and if the only means of computing that is to expand to S(S(S(....S(Zero)...))), it's not exactly going to be fast.
 
Indeed, if the typechecker needs to prove n + m = a + b where n, m, a, b are all distinct concrete values, it might end up doing the reduction you describe. If that's a concern, you can define binary naturals inductively instead; their representation takes logarithmic space. The complexity is then much less of an issue, though the constant factors are still there.
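For instance, a binary encoding along these lines keeps the representation logarithmic (a sketch with hypothetical names, least significant bit outermost; addition needs carry handling, so it is more involved than on Peano naturals):

sealed trait BNat
sealed trait BNil extends BNat            // the empty bit string: zero
sealed trait B0[R <: BNat] extends BNat   // even number: 2 * R
sealed trait B1[R <: BNat] extends BNat   // odd number:  2 * R + 1

// 6 = 110 in binary, least significant bit first:
// type Six = B0[B1[B1[BNil]]]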

Most of the time, instead, you have a similar situation but n, m, a and b are not constants, so n + m is a normal form. In that case, you need to supply a proof for your program to typecheck; if n, m, a and b are arguments to a function, then you also need to take as an argument a proof of n + m = a + b.

Naturals in practice get preferential treatment in many systems (though not in the one I'm discussing), but that's not as easy for other datatypes (say lists). So I discuss naturals mainly to discuss datatypes in general in the simplest setting possible.

Proofs by induction can be done, but they are typically done by the user (or the standard library, in the case of integers), and are written using recursion. (Example below). The automatic solvers for naturals will use machine integers where appropriate, and other systems, like Coq, have such solvers.
Moreover, if you want to run your program, you can generate, for instance, Haskell code using machine integers for speed, so that runtime performance is better.

Example proof: we'll look at proving n + 0 = n. First, an informal induction proof in Peano arithmetic, which explains how this proceeds in Agda. Then, I show the three lines which do the same.

Informal proof: To prove that n + 0 = n, you do case distinction on n (by pattern matching): if n is zero, the two sides reduce to the same thing (0+0 reduces to 0), so you just need to supply a proof of 0 = 0, that is (in such a system) a term of type 0 = 0. You can just invoke the reflexivity lemma (refl).
If n is instead S n', the two sides reduce to S (n' + zero) = S n', and you just need to invoke the lemma recursively on n' to get a proof that n' + zero = n'; by congruence (cong), you can apply S to both sides and turn this into a proof that S (n' + zero) = S n'.

The amazing thing is that the above paragraph takes only three lines of code, and once you know the language *and* you have access to the IDE and its amazing type debugger, they are even easy. If you know Haskell syntax, you're mostly good to go for this:

lem-plus-Z : (n : Nat) → n + Z ≡ n
lem-plus-Z Z = refl
lem-plus-Z (S n) = cong S (lem-plus-Z n)

That works given the definitions (I could have used the standard library):

data Nat : Set where  -- GADT-style syntax
  Z : Nat
  S : Nat → Nat

-- _+_ is a mixfix operator; underscores say where arguments should go.
_+_ : Nat → Nat → Nat
Z  + n = n
S m + n = S (m + n)

open import Relation.Binary.PropositionalEquality  -- defines equality, ≡

I suppose you could argue that you don't need to worry about such things as much with lists, where the numeric S operation and the list cons are naturally related (not sure I want to say isomorphic).  But one can have big lists also.
 
Right, S and cons are actually related: lists are decorated naturals. There's a paper implementing this observation to allow software reuse ("Ornamental Algebras, Algebraic Ornaments"), but it's not for the faint of heart (I don't fully get it either).

Best,
Paolo

Adriaan Moors

unread,
Jun 12, 2013, 5:00:20 PM6/12/13
to Rex Kerr, scala-i...@googlegroups.com, Miles Sabin

There's no "equational reasoning with limited syntax", neither does the "type system prove things by induction".
Actually, the type checkers for languages like Omega are proof checkers/assistants for inductive/constructive logic.
They don't do extremely powerful proofs, so they're more checker than assistant, but they do some proving as not all details need be explicit.

By the Curry-Howard isomorphism, your program (a value) is a constructive proof of the theorem stated by your types, which is checked by the type checker. I recommend "Type-level Computation Using Narrowing in Ωmega" for a good introduction.

Also, Omega's type level does support recursion (aka induction), albeit restricted so as to be decidable (known to terminate).

By "equational reasoning with limited syntax" I assume it was meant that you can define a limited subset of Scala that allows reasoning about behavioral equivalence, which would be required for type checking in the presence of computation at the type level. You can certainly do that, and this is one of Omega's design goals in my understanding. They use a different evaluation mechanism (narrowing) for terms above the value level, but it is a subset of the same language as the language at the value level.

cheers
adriaan

Paolo Giarrusso

unread,
Jun 13, 2013, 5:08:35 AM6/13/13
to scala-i...@googlegroups.com, Rex Kerr, Miles Sabin
On Wed, Jun 12, 2013 at 11:00 PM, Adriaan Moors
<adriaa...@typesafe.com> wrote:
>
>>> There's no "equational reasoning with limited syntax", neither does the
>>> "type system prove things by induction".
>
> Actually, the type checkers for languages like Omega are proof
> checkers/assistants for inductive/constructive logic.
> They don't do extremely powerful proofs, so they're more checker than
> assistant, but they do some proving as not all details need be explicit.

> By the Curry-Howard isomorphism, your program (a value) is a constructive
> proof of the theorem stated by your types, which is checked by the type
> checker. I recommend "Type-level Computation Using Narrowing in Ωmega" for a
> good introduction.

> Also, Omega's type level does support recursion (aka induction), albeit
> restricted so as to be decidable (known to terminate).

I'm aware of Curry-Howard, that's the foundation of Agda (which I use)
and as you know of Coq (which I'm slowly learning), but I wasn't aware
of narrowing.

I was just clarifying that typechecking does not involve *automated*
theorem proving (which is an entirely different beast from interactive
theorem proving), and doesn't automatically do inductive proofs
(though you can write them by hand). I was describing what Agda does,
which I think is called "Reduction & unification" in the paper on
Ωmega you link to. Narrowing already goes *a bit* more in the
direction of automated theorem proving, compared with Agda. But I can
see why narrowing can be helpful.

> By "equational reasoning with limited syntax" I assume it was meant that you
> can define a limited subset of Scala that allows reasoning about behavioral
> equivalence, which would be required for type checking in the presence of
> computation at the type level. You can certainly do that, and this is one of
> Omega's design goals in my understanding. They use a different evaluation
> mechanism (narrowing) for terms above the value level, but it operates on a
> subset of the same language used at the value level.

Yes, but Rex was objecting to the expressiveness limitations. But as I
mentioned, I don't think those limitations are that bad (if you have a
good termination checker).

Cheers,
--
Paolo G. Giarrusso - Ph.D. Student, Philipps-University Marburg
http://www.informatik.uni-marburg.de/~pgiarrusso/