Haxe performance


michael solomon

unread,
Jul 19, 2017, 4:23:14 PM7/19/17
to Haxe
I saw this nice jewel: http://code.haxe.org/category/functional-programming/enum-gadt.html and I'm wondering whether, and how much, Haxe's features cost in performance. I'm sure the Haxe team tries to optimize things, and if I implemented those features myself in the target language I wouldn't do any better than the Haxe compiler, but maybe there are things that are simply not done in some languages because of performance issues.
Maybe there are other, more basic and common things where the Haxe compiler has a bad influence on the code's performance?

David Elahee

unread,
Jul 19, 2017, 4:29:54 PM7/19/17
to haxe...@googlegroups.com
To answer correctly, we really need to know the intended use case and data quantity :)
Reading a well-formed tree is blazing fast; combining arbitrary data causes allocations, so you have to be careful about how much you stress these shiny toys...

Dan Korostelev

unread,
Jul 19, 2017, 7:10:27 PM7/19/17
to Haxe
Unfortunately there's no simple answer to this; it depends a lot on the actual use case and the target you want to use.

If we take JS, I believe Haxe generates the best output out there with regard to performance: parametrized types are zero-cost, the inlining and optimizations in the current Haxe compiler do really well, etc.

Talking about e.g. C++, it's a lot more complicated, because in plain C++ we can control our memory layout and execution flow at a very low level, while Haxe adds its own layer (e.g. the GC). So you may lose some performance due to boxing and casting (with e.g. non-@:generic type parameters) and, most importantly, extra allocations in comparison to perfect C++ code written for a specific task. OTOH Hugh (the C++ target maintainer) is doing a great job of constantly improving the Haxe/C++ run-time (e.g. the latest developments on a generational GC) and data structures (e.g. last year's enum value storage improvements).
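
To illustrate the @:generic part (my own made-up example, untested; the class and field names are hypothetical): the metadata asks the compiler to emit a specialized class per type parameter, so on static targets like hxcpp an Int field stays a plain int instead of being boxed:

// Hypothetical sketch: @:generic generates one specialized class per
// type parameter, so Point<Int> keeps unboxed ints on hxcpp.
@:generic
class Point<T> {
    public var x:T;
    public var y:T;
    public function new(x:T, y:T) {
        this.x = x;
        this.y = y;
    }
}

class GenericDemo {
    static function main() {
        var p = new Point<Int>(1, 2); // compiled as a specialized Point_Int
        trace(p.x + p.y);
    }
}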

So I'm pretty sure that, even if it's not perfect, Haxe-generated code performance is good enough when it comes from good source code, and in 90% of cases the performance bottleneck won't be caused by Haxe itself, but by the actual project code.

That said, if you care about performance, you should have some idea of what causes allocations and puts stress on the GC, so you don't allocate hundreds of objects at 60fps :) The good news is that the targets are not very different in this respect, and common sense applies to all of them.
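
To make that concrete, a contrived sketch (names made up, not from any real project): reuse one scratch object across frames instead of allocating a fresh one in every update:

// Contrived example: a per-frame update that reuses a single Vec2
// instead of allocating a new one 60 times per second.
class Vec2 {
    public var x:Float;
    public var y:Float;
    public function new(x:Float = 0.0, y:Float = 0.0) {
        this.x = x;
        this.y = y;
    }
}

class Game {
    // allocated once, reused every frame -> no per-frame GC pressure
    static var scratch = new Vec2();

    public static function update(dt:Float):Void {
        scratch.x = dt * 10;
        scratch.y = dt * 20;
        // ... use scratch here; don't keep references to it around ...
    }
}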



On Wednesday, July 19, 2017 at 11:23:14 PM UTC+3, michael solomon wrote:

Jeff Ward

unread,
Jul 20, 2017, 1:55:03 PM7/20/17
to Haxe
> I'm wondering whether, and how much, Haxe's features cost in performance

That's a smart thing to wonder. But indeed, you have to look at each use case / output. Haxe does provide tools to help you generate the best code possible (inline, abstract enum, dce, the static analyzer, etc), but there are certainly places where "the features of Haxe cause less-than-ideal implementations". For example, I believe Lars has noted several times that certain practices result in Dynamics in the hxcpp target.

Consider enums -- whereas I've always coded them in other languages with static consts, in Haxe they generate Objects or Arrays in the target language. That supports type safety, pattern matching, and runtime type identification on enum values, but they're obviously a larger, more expensive construct than static constants. You can use abstract enums, but 1) you don't get as many cool features, and 2) you have to know about them in the first place.
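
For example (my own sketch, untested, Haxe 3 syntax; the type names are made up):

// A plain enum: each value is a real object in the generated code,
// but you get exhaustive pattern matching, constructor parameters, etc.
enum Color {
    Red;
    Green;
    Blue;
}

// An abstract enum over Int: compiles down to plain constants in the output,
// at the cost of some of the enum goodies.
@:enum abstract Direction(Int) {
    var Up = 0;
    var Down = 1;
    var Left = 2;
    var Right = 3;
}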

There are also various features (closures, optional or nullable types? others?) that promote seemingly simple data types to Objects. These are often great features, but again, be aware of what they generate.
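
A couple of (made-up) cases of that promotion on the static targets:

class PromotionDemo {
    // Int is a plain machine integer on hxcpp/hxjava/hxcs...
    static function plain(i:Int):Int return i + 1;

    // ...while Null<Int> has to be a boxed / nullable value there.
    static function maybe(i:Null<Int>):Int
        return i == null ? 0 : i + 1;

    // The captured local 'n' can no longer live on the stack:
    // it moves into a heap-allocated closure context.
    static function counter():Void->Int {
        var n = 0;
        return function() return n++;
    }
}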

On the other hand, there are things like abstract enums and fully-inline abstracts that result in no runtime penalty at all.
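
E.g. something along these lines (again my own made-up example): an abstract over Float whose methods are all inline, so the generated code is just plain Float arithmetic and no wrapper object ever exists at runtime:

// A unit-safe wrapper around Float; because everything is inline,
// the abstract only exists at compile time.
abstract Meters(Float) from Float to Float {
    public inline function new(v:Float) this = v;

    public inline function toFeet():Float
        return this * 3.28084;
}

class UnitDemo {
    static function main() {
        var d = new Meters(100.0);
        trace(d.toFeet()); // compiles down to roughly: trace(100.0 * 3.28084)
    }
}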

I'm sure various people have these notes in their heads. I wonder if there's a list somewhere.

But, bottom line, in performance-critical code, definitely be wary, use profiling tools of the target language, and know what's getting generated. This is true for any cross-compiler.

Best,
-Jeff

michael solomon

unread,
Jul 20, 2017, 2:48:03 PM7/20/17
to Haxe
Thanks guys,
I understand it depends on the use case and the target language, and I'd be happy if one of Haxe's experts wrote some guidelines for each target. Or maybe they already exist?

Juraj Kirchheim

unread,
Jul 27, 2017, 6:17:22 AM7/27/17
to haxe...@googlegroups.com
Just to clarify: it will vary not only between target languages. Take JavaScript: you will get different results between runtimes, e.g. SpiderMonkey vs. Nitro vs. Chakra vs. V8 (where you have to distinguish Crankshaft vs. TurboFan + Ignition). And the results are further influenced by the processor architecture the engine runs on. And a gazillion other things. The same is true for hxcpp and hlc (adding all possible compiler optimization flags to the matrix), and also for hxjava/hxcs to some degree. There have got to be at least half a dozen well-supported Python runtimes as well. So, all in all, it's impossible to make any kind of prediction here.

In all seriousness, isolated measurements of language feature "performance" (whatever that means) should not dictate how you code. The cost of a specific component is only meaningful in the context of the system it was integrated in.

Build the damn thing, avoid overengineering like the plague, and then start measuring things in the real world. Then iterate your design with all the knowledge you have gained about the actual performance-related and architectural challenges of the problem domain. Doing it the other way around is very much like taking a leap into the unknown while putting a sling around your neck to avoid falling into the abyss.

Worthwhile reading: http://wiki.c2.com/?UniformlySlowCode (and also the pages it references, in particular http://wiki.c2.com/?ProfileBeforeOptimizing and http://wiki.c2.com/?OptimizeLater).

Best,
Juraj


Juraj Kirchheim

unread,
Jul 27, 2017, 6:25:53 AM7/27/17
to haxe...@googlegroups.com
Also, I should add that the compiler team is constantly working to decrease the cost of Haxe's abstractions. 

So if you dismiss language features that would put a simple and elegant solution within your reach, and instead contort your mind to come up with a solution that is supposedly faster (you can't actually know that without measuring the two against one another), and then the features you avoided get optimized to be faster by a sufficient factor, all that hard work was for nothing.

michael solomon

unread,
Jul 29, 2017, 7:48:31 AM7/29/17
to Haxe
Thanks for clarification!!

kobi2187

unread,
Jul 31, 2017, 2:19:33 PM7/31/17
to Haxe
What I like is the fast compilation speed, so development is fast. Things are also pretty self-explanatory and just work; the ideas are simple, yet elegant.
And you have a portable codebase, of course, so you can change targets if you need a faster executable, better debugging, etc.
Some of the target languages also have tools to compile them further with LLVM, but that's really for the last stages.
I would recommend watching the scale of things and choosing the correct data structures. If you really need more performance than that, try profiling and verifying that the operations in the code match the data structures well.
Those are all general suggestions. I feel performance concerns should not dictate your code style. With Haxe you get quick development, generated code that is faster than scripting languages, and portability. Plus, the language is really nice, and with tink's extensions/macros you get more advanced language-level features that make some language concepts very convenient and concise.
Some programmers in embedded or games care about performance in a premature-optimization way and are willing to limit themselves to very sub-standard and minimal SDKs (remember Java ME?). I think that's a mistake in the long term.


Just my 2c, kobi