Hi,
The issue with using an existing library is that when we have a specific need, it is quite hard to adapt it properly.
Genericity also has a cost, and if we have to convert the generated structure, that has a cost too.
We do not use a parser to visualize an AST, but to generate JS code from Python.
Therefore we do not need a generic AST structure, and can tailor-make our own with all of the information we need (e.g. positions in the Python code, type information, operator priorities, etc.), putting the children in the order we want/need, etc. We can also insert (and remove) our own checks (e.g. Python parser error messages), etc.
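To illustrate, here is a minimal sketch of what such a tailor-made node could look like. All names and fields here are hypothetical, not SBrython's actual ones; the point is that everything the code generator needs (result type, operator priority, source position) lives directly on the node, with children already in generation order.

```javascript
// Hypothetical tailor-made AST node (illustrative names, not SBrython's).
function makeNode(type, children, opts = {}) {
  return {
    type,                                 // node kind, e.g. "BinOp", "Call"
    children,                             // children, already in generation order
    resultType: opts.resultType ?? null,  // inferred result type, if known
    priority:   opts.priority   ?? 0,     // operator priority, to decide parenthesizing
    pyPos:      opts.pyPos      ?? null,  // [begLine, begCol, endLine, endCol] in the Python source
  };
}

// e.g. the "+" in "a + b", carrying its priority for the JS generator
const node = makeNode("BinOp", ["a", "+", "b"], { priority: 11 });
```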
In SBrython, I only do the AST2JS part, and use Brython's AST that I convert into my own format.
> Would also like to see timings from chrome to compare.
I didn't measure it initially. But Chrome's optimization is too aggressive to benchmark: for example, when running the tests twice, it will cache the generated JS.
But yes, in Chrome I gain more than 10ms.
Next weekend (or the one after) I think I will work on my Editor to get better stats.
> The timing numbers make me a bit suspicious - showing so much reduction
> across the board in parts you never changed. Could be fewer data
> objects and less GC runs, as you say. Or maybe the measurements are off
> somehow.
I had something like 14,700 AST nodes in total in my tests. Each node carried a type, an array of children, a toJS function, a result type, a value, and Python+JS begin/end positions (line and column), so roughly ~20 objects per AST node, i.e. up to 294,000+ objects for the GC to check when it needs to (de)allocate.
Now I have 3 typed arrays, and only one array of values, to check.
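The layout change can be sketched like this (illustrative names and field choices, not SBrython's actual code): instead of ~20 small heap objects per node, the per-node data goes into a few parallel typed arrays plus one plain array for JS values, so the GC only has a handful of arrays to scan.

```javascript
// Struct-of-arrays sketch: one slot per AST node in each array.
const MAX_NODES = 16384;

// Numeric per-node fields live in typed arrays (no per-node GC objects).
const nodeType   = new Int32Array(MAX_NODES); // node kind as an integer id
const resultType = new Int32Array(MAX_NODES); // inferred result type id
const firstChild = new Int32Array(MAX_NODES); // index of first child, -1 if none

// A single ordinary array holds the few real JS values (names, constants).
const values = new Array(MAX_NODES).fill(null);

function setNode(i, type, rtype, child0, value) {
  nodeType[i]   = type;
  resultType[i] = rtype;
  firstChild[i] = child0;
  values[i]     = value;
}

// e.g. node 0 is a leaf holding the name "x" (ids are arbitrary here)
setNode(0, /*type=*/3, /*rtype=*/1, /*child0=*/-1, "x");
```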
And Pierre's Py2AST performs a LOT of allocations, so the GC is likely called a lot there too.
And the GC is likely called at the start of py2ast, to free the data from the previous test file.
Cordially,