[ANN] Tinkerbell updates. Faster compilation, easy expression matching, for loop magic, ...

Juraj Kirchheim

Oct 15, 2012, 6:21:28 AM
to haxe...@googlegroups.com
Hiho,

I'd like to make an announcement about a couple of important (or so I
feel) updates I've pushed to haxelib today.

## Leveraging the compiler cache

Because tink was released before macro reification, some of its code
called tink.macro.tools.AST.build, which does about the same thing
(plus a couple of handy extras) but is itself a macro. The resulting
macro-in-macro calls prevented a lot of tink's code from being
cached. There were also calls to Std.format, which caused the same
problem.
But from this day forth, tink is fully cacheable!
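
For illustration (a made-up sketch, not actual tink code): building an
expression via AST.build means invoking a macro from within a macro,
whereas reification is resolved by the parser itself, so nothing gets
in the cache's way.

import haxe.macro.Expr;

class CacheSketch {
    // An expression-building macro of this kind used to call AST.build;
    // with reification the same expression is constructed without any
    // nested macro invocation.
    @:macro public static function sum():Expr {
        // before: return tink.macro.tools.AST.build(x + y * 2);
        return macro x + y * 2; // reification, handled by the parser
    }
}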

## Expression matching

Once tink became rather good at building expressions (and even more so
after macro reification made it simpler still), one of the remaining
difficulties in writing macros gained importance: traversing/parsing
the AST. Expression matching is the most intuitive and simple way to
do this, and that's what ExprTools.match does, as documented here (the
last function of this section at the time of writing):
https://github.com/back2dos/tinkerbell/wiki/tink_macros#wiki-expression_tools_advanced
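
To give a sense of what this saves you (my own example, using only the
standard haxe.macro enums rather than tink's API): checking by hand
whether an expression has the shape "<something> + 0" means one nested
switch per enum level, since Haxe 2 switches only capture constructor
arguments one level at a time. ExprTools.match lets you state such a
pattern in one go (see the wiki above for the actual call).

import haxe.macro.Expr;

class MatchByHand {
    // Does `e` have the shape "<something> + 0"?
    // Written out with plain nested switches over the AST enums.
    static function isPlusZero(e:Expr):Bool {
        return switch (e.expr) {
            case EBinop(op, left, right):
                switch (op) {
                    case OpAdd:
                        switch (right.expr) {
                            case EConst(c):
                                switch (c) {
                                    case CInt(v): v == "0";
                                    default: false;
                                }
                            default: false;
                        }
                    default: false;
                }
            default: false;
        };
    }
}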

In fact this was an undocumented feature for quite some time:
https://github.com/back2dos/tinkerbell/commit/d5f34f2ea1729bde9ec49b3ad5a3a99ce7bc3480
The API is now largely final, but if I broke your code, next time
please do not hesitate to ask me before using an undocumented feature
;)

## Extended for loop syntax

After last week's discussion on what I'd summarize as "the limitations
of Haxe for loops", I've now completed the first version of my ideas
for an extended for loop syntax here:
https://github.com/back2dos/tinkerbell/wiki/tink_lang#wiki-for

In a nutshell, you can now easily iterate over multiple targets
simultaneously and you can have numerical loops with arbitrary steps
in both an upward and downward direction.
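
To make the motivation concrete: in plain Haxe 2.10, loops like these
have to be written out as while loops by hand, which is roughly the
shape of code the extended syntax generates for you. (The lines below
are deliberately plain Haxe, not tink syntax; the actual surface syntax
is documented in the wiki above.)

class LoopShapes {
    static function main() {
        // downward numeric loop with step 2
        var i = 10;
        while (i > 0) {
            trace(i);
            i -= 2;
        }

        // iterating two targets simultaneously
        var names = ["a", "b", "c"];
        var values = [1, 2, 3];
        var k = 0;
        var n = names.length < values.length ? names.length : values.length;
        while (k < n) {
            trace(names[k] + " => " + values[k]);
            k++;
        }
    }
}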


That's pretty much all for today, except for a couple of minor
enhancements. Thanks for any feedback.

Regards,
Juraj

Franco Ponticelli

Oct 15, 2012, 9:03:09 AM
to haxe...@googlegroups.com
Wonderful release, thank you!

Franco

Jason O'Neil

Oct 15, 2012, 9:21:43 PM
to haxe...@googlegroups.com

> Now that's kind of bad luck for John and Jack. Luckily there's one person they can always lean on:

hahaha, love the documentation :)

The pattern matching for macros looks very cool too, thanks for the update - a very useful library!

Jason
...

Justin Donaldson

Oct 15, 2012, 9:48:12 PM
to haxe...@googlegroups.com
Tink is great, thanks for the update.

-Justin
--
blog: http://www.scwn.net
twitter: sudojudo

Juraj Kirchheim

Oct 20, 2012, 6:44:28 AM
to haxe...@googlegroups.com
So I've played around with for-loops a little more, focusing on
performance this time.

With the head revision of tink, it is now possible to add `@:tink_for`
metadata to a class to define custom iteration rules. Example:

@:tink_for(init, hasNext, next) class C {
...
}

Which leads to the following transformation:

for (i in new C()) body;
// is transformed into:
{
    var target = new C();
    init;
    while (hasNext) { var i = next; body; }
}
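
For a sense of how this might look on one's own class, here is a
hypothetical example (not part of tink) following the (init, hasNext,
next) form above:

// Hypothetical user class; the three metadata arguments correspond to
// init, hasNext and next in the transformation above (with `this`
// rewritten to refer to the loop target).
@:tink_for({ var i = this.from; }, i > this.to, i--)
class Countdown {
    public var from:Int;
    public var to:Int;
    public function new(from:Int, to:Int) {
        this.from = from;
        this.to = to;
    }
}

// `for (x in new Countdown(10, 0)) trace(x);` should then expand to
// roughly:
//   var target = new Countdown(10, 0);
//   var i = target.from;
//   while (i > target.to) { var x = i--; trace(x); }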

The output shown above is a bit simplified; it gets more complicated if
there are multiple loop targets or fallbacks, or if the body has jumps.
For neko Arrays, for example, this is the appropriate annotation:

@:tink_for(
    { var i = 0, l = this.length, a = neko.NativeArray.ofArrayRef(this); },
    i < l,
    macro a[i++]
) @:core_api @:final class Array<T> { ...

Tink will do the necessary variable renaming and so on. In this case,
the resulting code is about 4.5 times faster than the loop generated
by Haxe (given a bare body - which is far from a real world scenario).
Such metadata is added to a couple of core classes by
tink.lang.macros.LoopSugar to get faster loops. On neko I was able to
speed up iteration by factors of 4.5, 2.9, 6.2 and 2.9 for Array,
IntHash, List and haxe.FastList respectively.
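
Those figures are for (near-)bare loop bodies; if you want to reproduce
the order of magnitude, a micro-benchmark along these lines should do
(my own sketch, not the actual measurement code):

// Minimal timing sketch for neko: compile once without and once with
// tink's loop sugar applied, then compare the reported times.
class Bench {
    static function main() {
        var a = [];
        for (i in 0...1000000) a.push(i);

        var sum = 0;
        var t0 = neko.Sys.time();
        for (run in 0...100)
            for (x in a) sum += x; // near-bare body
        var t1 = neko.Sys.time();

        neko.Lib.println("elapsed: " + (t1 - t0) + "s (sum: " + sum + ")");
    }
}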

Also, because tink performs the for -> while transformation itself,
iterators generally become eligible for inlining already with Haxe
2.10.
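
As a small illustration (my own example): with an iterator like the one
below, once the loop is an explicit while, hasNext()/next() are
ordinary calls on a known type and can be inlined even on 2.10.

// Hand-written iterator with inline methods; in a tink-generated while
// loop both calls are plain, inlinable method calls.
class IntRangeIter {
    var i:Int;
    var max:Int;
    public function new(min:Int, max:Int) {
        this.i = min;
        this.max = max;
    }
    public inline function hasNext():Bool { return i < max; }
    public inline function next():Int { return i++; }
}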

This is still at an early stage, but I feel it's a good time to get
some general input, as well as hints on the fastest native ways to
iterate over Haxe core types. For the most part this might seem like
overkill, because the time spent in a loop usually comes from the
body. But if you can get this kind of speed-up for free, I think it's
worth it. Also, I was able to make a blank loop over an IntHash more
than 10 times faster on php and a blank loop over a List 12 times
faster on avm2, which does sound like it could really make a
difference.

Currently I am looking into a couple of issues:

- subclasses probably shouldn't be able to override this behavior, as
it is effectively a form of inlining. OTOH if a subclass has the
necessary information for a faster iteration, then one should be able
to leverage it, as long as the iteration still uses the same
values - which is almost impossible to check statically in general.
Maybe an additional `@:tink_for_override` should allow doing this ...
at your own risk.
- it should be possible to add optimization rules for methods as well,
e.g. the keys method on IntHash/Hash. In fact, in my experience
iterating over the keys of a hash is more common than iterating over
the values.
- one should have the option to merely delegate the iteration to
another expression.
- generated classes should build the actual methods from the
optimization rules, i.e. if you specify `@:tink_for` rules, then
`tink.lang.Cls` should also be able to generate an iterator method
from them. Having basically the same code twice in one class doesn't
seem like a good idea.

I'd be grateful for any thoughts, especially on the first point :)

Regards,
Juraj