OCamlScript might help any effort to make Haxe compiler self-hosting


Rezmason

Jan 31, 2016, 11:05:55 AM
to Haxe
Hey folks!

I know there are some people in our community who feel that Haxe being able to compile itself would carry a kind of symbolic significance about its maturity as a language.

There are other people who probably think it'd simply be cool to compile Haxe in Haxe. I guess I belong in that camp, though it's not a personal priority.

As it turns out, the company Bloomberg has put a public project called OCamlScript on GitHub. It's supposedly in the early stages, but it claims to compile OCaml to JavaScript. This might be a useful step for transforming Haxe's OCaml source into code that is more similar to the Haxe syntax, so I wanted to put it on your radar. :-)
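For a taste of what it does, here's a toy module of the sort you'd feed it. I haven't verified the exact output myself, but the project's pitch is that this becomes a small, readable JavaScript module with an ordinary add function in it:

(* A toy OCaml module of the kind OCamlScript claims to turn into
   readable JavaScript (roughly one JS module per OCaml module). *)
let add x y = x + y

let () = print_endline (string_of_int (add 2 3))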

PeyTy

Feb 5, 2016, 3:31:03 PM
to Haxe
Hello, Rezmason! Some time ago I even wrote a direct OCaml -> Haxe transpiler, so JavaScript is "so yesterday". :D
But it sucked. The Haxe compiler is written in an unportable way, so nope.
I am still working on an LLVM backend for Haxe, because the main reasons for using OCaml are:
- Native superspeed
- Bindings to native libraries
But it lacks fast macro execution.
Adding LLVM will solve the macro problem, then replace Neko as Haxe's primary VM, and only then can we rewrite the compiler mostly FROM SCRATCH
and achieve good speed, nativeness and self-hosting.
If you have experience with OCaml+LLVM, help will be appreciated.
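For anyone wondering what "OCaml+LLVM" even looks like, here is a minimal, self-contained sketch using the LLVM OCaml bindings (the opam "llvm" package). It just builds and prints the IR for a trivial add function; it is not taken from any actual backend code, and a real backend would lower typed Haxe AST and add a JIT/AOT step.

(* Minimal sketch with the LLVM OCaml bindings: build IR for add(x, y) = x + y
   and print the module. Purely illustrative, not Haxe compiler code. *)
let () =
  let ctx = Llvm.global_context () in
  let m = Llvm.create_module ctx "haxe_sketch" in
  let i32 = Llvm.i32_type ctx in
  let fn_ty = Llvm.function_type i32 [| i32; i32 |] in
  let f = Llvm.define_function "add" fn_ty m in
  let builder = Llvm.builder_at_end ctx (Llvm.entry_block f) in
  let sum = Llvm.build_add (Llvm.param f 0) (Llvm.param f 1) "sum" builder in
  ignore (Llvm.build_ret sum builder);
  print_string (Llvm.string_of_llmodule m)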
Have a good day!

Marcelo de Moraes Serpa

Feb 5, 2016, 5:55:50 PM
to haxe...@googlegroups.com
> then replace Neko as Haxe's primary VM

You raise a good point there. Neko is pretty limited in some areas (it lacks good debugging tools and bindings to C libs). What other VM would be suitable for Haxe? Maybe V8? Would that even be feasible?



PeyTy

Feb 5, 2016, 6:05:15 PM
to Haxe
Hello, Marcelo! I tried V8, SO SLOW IT'S NOT EVEN FUN!!! I have already spent two years (wow!) investigating JIT/AOT tech, and am finally aiming at creating Haxe's very own VM on top of LLVM. This would also add features like optimization passes (to effectively replace the current "analyzer" for native platforms), support for C/asm.js/WebAssembly generation, and plenty more.

I clearly *do not* want to spend my time on "fake tech" like transpiling Haxe again and again for little to no benefit.
Haxe must have its own VM for macros, app/lib distribution, and self-hosting.

I hope to join the Haxe Foundation to work full-time on this :)

Ashiq A.

Feb 5, 2016, 6:24:28 PM
to haxe...@googlegroups.com
For game development, Neko is quite good. It's easier to set up than the debug version of Flash Player, and it's easy to package up Neko apps with all their dependencies together (without requiring an installer).

But I've never had a working Haxe debugger (or even autocomplete in an IDE), so I can't comment on the debugging side ("trace" is as far as I go).

Marcelo de Moraes Serpa

Feb 5, 2016, 6:30:43 PM
to haxe...@googlegroups.com
> I hope to join the Haxe Foundation to work full-time on this :)

Great :)

ping @HaxeFoundation!

PeyTy

Feb 5, 2016, 6:32:25 PM
to Haxe
Thanks for the feedback, Ashiq! What do you (and everyone else) think: would it be fine to have one non-JIT, debuggable version of the VM, and another JIT/AOT one with no debugging support? When LLVM JITs, it transforms the code so heavily (like C++) that the executed program is not what you actually wrote (as opposed to Java, which always follows each instruction as written).
The non-JIT VM would just be slowly interpreted, but with full-featured debugging.

PeyTy

Feb 5, 2016, 6:35:12 PM
to Haxe
@Marcelo I must make a proof of concept first! =)

Simon Krajewski

Feb 5, 2016, 6:53:21 PM
to haxe...@googlegroups.com
On 06.02.2016 at 00:35, PeyTy wrote:
> @Marcelo I must make a proof of concept first! =)

We (well, Nicolas mostly) are currently working on what is basically the
successor to Neko. Check out genhl.ml at
https://github.com/HaxeFoundation/haxe/tree/hl.

It's not yet passing all unit tests and there's only the interpreter
right now.

Simon

PeyTy

Feb 5, 2016, 6:57:16 PM
to Haxe
Hi, Simon! Is a lack of documentation a key feature of the new Neko successor? :) Any info?

PeyTy

Feb 5, 2016, 7:13:06 PM
to Haxe
It is very sad to read this https://medium.com/@ncannasse/some-words-about-haxe-foundation-e97a4e9d7e41#.mif22v3nl and then see such an innovative "nobody knows what" piece of code.

Nicolas Cannasse

Feb 6, 2016, 4:06:28 AM
to haxe...@googlegroups.com
On 06/02/2016 at 01:13, PeyTy wrote:
> It is very sad to read this https://medium.com/@ncannasse/some-words-about-haxe-foundation-e97a4e9d7e41#.mif22v3nl and then see such an innovative "nobody knows what" piece of code.

Come on, don't take such a stance.
It's not yet announced and will not be until I consider it ready. I can
say it's still experimental for now. Hence no docs.

Nicolas

Oleg Petrenko

Feb 6, 2016, 6:44:10 AM
to haxe...@googlegroups.com
Wow, hello, Nicolas! Thanks for the response! I'll study the sources then and privately mail you about the pros and cons, if you are open to discussion.

On Saturday, February 6, 2016, Nicolas Cannasse wrote:

Philippe Elsass

Feb 6, 2016, 8:11:29 AM
to Haxe

Debugging? Debugging debugging debugging, debugging!

Speed? Bonus.


Oleg Petrenko

Feb 6, 2016, 9:23:41 AM
to haxe...@googlegroups.com
Booo? Boo boo boo boo, booo! xD
Yes, Philippe, debugging is a very common request ;)
But speed is sooo waaanted toooo :[
Having two VMs will solve both.
P.S. Seems like the VM IL would look like... Typed Lisp! :O
Interesting how Nicolas's hl looks... hmm...

On Saturday, February 6, 2016, Philippe Elsass wrote:

Hugh

Feb 7, 2016, 9:13:16 PM
to Haxe
Just to work out what you are trying to do here.
You have put the OCaml->Haxe converter on hold, and decided to directly write an LLVM backend?
The idea being that this backend would then #1 run macros faster, and #2 produce executables in its own right?

While it is true that LLVM can generate "ultimately" fast code, I think Haxe performance is going to be more about how you handle Dynamic, interface inheritance, GC and a few other features.

It is also quite a big dependency - possibly on the order of the JVM? If you had Haxe-in-Haxe, the JVM would be the easiest way to get JIT macros.
If I added JIT to cppia, then I think hxcpp would be the smallest/lowest-dependency way to get JIT macros - again assuming Haxe-in-Haxe.

Hugh

Oleg Petrenko

Feb 7, 2016, 10:08:49 PM
to haxe...@googlegroups.com
Hello, Hugh!

| You have put the OCaml->Haxe converter on hold, and decided to directly write an LLVM backend?
My first trick is to generate IR binaries, similar to Java .jar/.class files, via gennative.ml,
and load them into a toolchain (which can either 1) execute them (JIT and non-JIT) or 2) transpile them (to C)).

--- UPDATE BEGIN
I had to re-read your questions! :) I misunderstood them at first! The other info is still relevant to read, anyway.

Okay, the basic idea is "environment coherency".
| My first trick is to generate IR binaries, similar to Java .jar/.class files, via gennative.ml
This lets us compile any Haxe code to a compiler-independent form,
which we can then run in the same VM we run our MACRO CODE in, yeah!
So we get near-zero interoperability overhead between macros and the compiler itself (a rough sketch of this shape follows right after this update).

I was going to use LuaJIT for this, but it is... slow (at fully emulating Haxe) and abandoned (nobody fully understands how LuaJIT works, lol)... yup. Okay, you can read the rest of my answer :)
Thanks for your interest!
--- UPDATE END
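Here is the promised sketch of that "one IR, several consumers" shape. It is purely hypothetical: none of these names exist in the Haxe compiler or in my code, it only shows the intent.

(* Hypothetical sketch only: a single compiler-independent IR module that can
   be interpreted, JITted (e.g. for macros), or emitted as C. None of these
   names exist in the real Haxe compiler. *)
type ir_module = { name : string; code : bytes }

type consumer =
  | Interpret          (* slow path, fully debuggable *)
  | Jit                (* LLVM-backed execution, e.g. for macros *)
  | Emit_c of string   (* write C, then hand off to gcc/clang/emscripten *)

let load (raw : bytes) : ir_module = { name = "main"; code = raw }

let run (how : consumer) (m : ir_module) : unit =
  match how with
  | Interpret   -> print_endline ("interpreting " ^ m.name)
  | Jit         -> print_endline ("jitting " ^ m.name)
  | Emit_c path -> print_endline ("writing C for " ^ m.name ^ " to " ^ path)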

| The idea being that this backend would then #1 run macros faster
I think it is reasonable to provide bindings to a JITted VM written in C. I don't think this will slow down code generation.
And it would be a considerable simplification to the overall maintenance of the VM (only a single version of it, not a "VM in C++" plus a "VM in OCaml", like with Neko).
--- UPDATE BEGIN
I meant if you add the LLVM JIT to the existing OCaml compiler.
--- UPDATE END

| and #2 produce executables in its own right?
The problem is that 1) Apple disallows direct assembly (it uses its own bitcode, which accepts only C/C++/ObjC/Swift) and 2) some technical limitations (Clang has no good linker on Windows, linkers themselves are too big to distribute with Haxe, etc.) moved me to the decision of outputting the C language as a final IR, with special optimizations, and then compiling with GCC/Clang/Emscripten/etc.
BUT
The user can run the JITted VM instead of a pre-compiled executable (like Neko).
So we have both the VM and C options, just no direct executables; I don't see the big deal with that feature.

| While it is true that LLVM can generate "ultimately" fast code, I think Haxe performance is going to be more about how you handle Dynamic, interface inheritance, GC and a few other features.
These all... no, I mean, these really ARE the problems LLVM aims at. I remember you used SLJIT, but LLVM provides so many possibilities for runtime optimization that *Haxe may essentially become a language **without** Dynamic or interface overhead*. I studied how LuaJIT solves this, and how it is done in LLVM-based JavaScript interpreters. Basically, if done right, LLVM can patch pointers in memory on the fly, so even v-table lookups may be avoided.
And I haven't even started to talk about the mix of *custom*+built-in optimization passes, which may outperform the current analyzer at the low level, and I believe at the high/medium level too.
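To illustrate the v-table idea (this is just a conceptual sketch, not LLVM or Haxe code, and all the names are made up): a JIT that has only ever seen one concrete class behind a call site can replace the dynamic lookup with a cheap tag check plus a direct call, and fall back to generic dispatch when the guess turns out wrong.

(* Conceptual sketch of guarded devirtualization / a monomorphic inline cache.
   Every name here is invented for illustration. *)
type obj = { tag : int; fields : float array }

(* Generic "slow" path: dispatch through a dynamic lookup. *)
let generic_call (dispatch : obj -> float) (o : obj) = dispatch o

(* Specialized call site: if the object has the expected tag, call the known
   method directly; otherwise fall back (a real JIT would also re-patch the
   call site at this point). *)
let make_inline_cache ~expected_tag ~(direct : obj -> float) ~dispatch =
  fun (o : obj) ->
    if o.tag = expected_tag then direct o
    else generic_call dispatch o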

| It is also quite a big dependency - possibly on the order of the JVM? If you had Haxe-in-Haxe, the JVM would be the easiest way to get JIT macros.
I see Haxe as more of a C++-like language than a Java-like one.
You can open the Haxe Java std and see how many tricks it takes to properly emulate Haxe on top of the JVM.
And... this is out of scope for my project: use native tools to generate and run native stuff.

Oh yeah, Hugh! So as not to be proofless: you know, I tried writing some high-performance code in Haxe using Neko, V8, Mono and HXCPP.
HXCPP was 100 times faster than all of them! And when I added C-specific magic, it became unbeatable!
Totally impossible with any of the other runtimes.

| If I added JIT to cppia, then I think hxcpp would be the smallest/lowest-dependency way to get JIT macros - again assuming Haxe-in-Haxe.
"Just add JIT" gives you no benefits, I think. You could eventually replace cppia with the current LuaJIT target, which we are developing with Justin and Simon, and you will see that LuaJIT will be much faster than cppia.
I haven't tested it myself, and the LuaJIT target still needs some optimisations, but you can test it yourself and tell me the results! Would be great to see them!
If you investigate how Mike Pall implemented the interpreter in LuaJIT, you'll be shocked. And even more shocked by the JIT. This all requires many years of investment, but we can just use LLVM and be done with it.

On Monday, February 8, 2016, Hugh wrote:

PeyTy

Feb 7, 2016, 10:34:24 PM
to Haxe
As for the OCaml->Haxe project, I want to "reboot" it from scratch.
Make a new version of the o->h converter which generates much better code.
But this only makes sense if we have a proper replacement for the OCaml runtime.
So not today, not tomorrow.

JLM

Feb 9, 2016, 8:05:28 AM
to Haxe
I have been having fun and games with push in AS3 and Array pooling at work :(
So I was just curious whether there are plans to have pooling built into HL, or will the memory management be designed so that pooling is not really needed?
Had a quick look at how arrays grow.

(size * 3) >> 1;
https://github.com/HaxeFoundation/haxe/blob/hl/std/hl/types/ArrayObj.hx#L212
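If I'm reading that right it's a 1.5x growth rule, so (assuming a starting capacity of 4, which is just my guess) the capacities would go 4, 6, 9, 13, 19, 28, 42, 63, ... Quick check in OCaml, since that's what the compiler is written in:

(* The (size * 3) >> 1 growth rule applied repeatedly, from an assumed
   starting capacity of 4; prints: 4 6 9 13 19 28 42 63 *)
let grow size = (size * 3) lsr 1

let () =
  let rec show n size =
    if n > 0 then begin
      Printf.printf "%d " size;
      show (n - 1) (grow size)
    end
  in
  show 8 4;
  print_newline ()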

But I wondered how expensive creation of an array or list would be on HL.
https://github.com/HaxeFoundation/haxe/blob/hl/std/hl/types/ArrayObj.hx#L10

And whether pooling had been a consideration in the design, or is even relevant.


Nicolas Cannasse

Feb 9, 2016, 4:33:23 PM
to haxe...@googlegroups.com
I have not yet worked on the GC, but I expect allocation to be quite
fast, making pooling not that interesting except for cases where you
do a lot of recycling (particles spawning/dying, for instance).

Best,
Nicolas
