Unfortunately there's no simple answer to this: it depends a lot on the actual use case and on the target you want to use.
If we take JS, I believe Haxe generates the best output out there with regard to performance: parametrized types are zero-cost, and the inlining and optimizations in the current Haxe compiler do really well, etc.
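To illustrate what "zero-cost" means here, a minimal sketch (the `Meters` type is a made-up example, not anything from a real library): an `abstract` over `Float` with `inline` operations is erased at compile time, so the generated JS is plain number arithmetic with no wrapper object.

```haxe
// Sketch: an abstract type that exists only at compile time.
// On the JS target this should compile down to bare arithmetic.
abstract Meters(Float) from Float to Float {
    public inline function new(v:Float) this = v;

    @:op(A + B) public inline function add(other:Meters):Meters
        return new Meters(this + other);
}

class Main {
    static function main() {
        var a = new Meters(1.5);
        var b = new Meters(2.5);
        trace(a + b); // type-safe in Haxe, just `1.5 + 2.5` in the output
    }
}
```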
With e.g. C++ it's a lot more complicated, because in plain C++ you can control memory layout and execution flow at a very low level, while Haxe adds its own layer on top (e.g. a GC). So you may lose some performance to boxing and casting (e.g. with non-@:generic type parameters) and, most importantly, to extra allocations compared to hand-written C++ tailored to a specific task. OTOH Hugh (the C++ target maintainer) is doing a great job constantly improving the Haxe/C++ run-time (e.g. the latest developments on the generational GC) and data structures (e.g. last year's enum value storage improvements).
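A quick sketch of the boxing point (the `Vec` class is a made-up example): on static targets like C++, a plain type parameter is compiled as `Dynamic`, so primitive values like `Float` get boxed; marking the class `@:generic` asks the compiler to emit a specialized class per type instead.

```haxe
// Sketch: @:generic makes the compiler generate a concrete class
// per type parameter (e.g. a specialized Vec_Float on C++),
// so Float fields stay unboxed.
@:generic
class Vec<T> {
    public var x:T;
    public var y:T;
    public function new(x:T, y:T) {
        this.x = x;
        this.y = y;
    }
}

class Main {
    static function main() {
        var v = new Vec<Float>(1.0, 2.0); // specialized, no Dynamic boxing
        trace(v.x + v.y);
    }
}
```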
So I'm pretty sure that even if it's not perfect, Haxe-generated code performs well enough when it comes from good source code, and in 90% of cases the performance bottleneck won't be caused by Haxe itself but by the actual project code.
That said, if you care about performance, you should have some idea of what causes allocations and puts pressure on the GC, so you don't allocate hundreds of objects per frame at 60fps :) The good news is that the targets are not very different in this respect, and common sense applies to all of them.
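One common-sense technique for that is a simple object pool, so a 60fps loop reuses instances instead of allocating and discarding them every frame. A minimal sketch (the `Particle` and `Pool` names are made up for illustration):

```haxe
// Sketch: reuse objects instead of allocating per frame,
// which keeps GC pressure low in a hot loop.
class Particle {
    public var x:Float = 0;
    public var y:Float = 0;
    public function new() {}
}

class Pool {
    static var free:Array<Particle> = [];

    // Hand out a recycled instance when one is available.
    public static function get():Particle
        return free.length > 0 ? free.pop() : new Particle();

    // Reset and return an instance for later reuse.
    public static function put(p:Particle):Void {
        p.x = 0;
        p.y = 0;
        free.push(p);
    }
}
```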
On Wednesday, July 19, 2017 at 23:23:14 UTC+3, michael solomon wrote: