So, again, I hope someone will take this code and build it as it should be built, because if this tool works, it will be a HUGE bonus for Haxe, one that Google Dart doesn't even have yet... I'm here if you have questions.
Note: I provide this code as-is, with no warranty. I also don't have any time to put into this, so it's a donation (hacky code anyway ;)).
Note 2: I'm having trouble uploading the code, so I posted it at this URL until I can get the upload to work on Google Groups.
I also think Haxe for JS development could be the best thing since sliced bread. This sounds interesting; why not push it to GitHub? :)
I totally agree that the maintenance of externs is a pain.
As a side note, though, I think the best thing would be to somehow include that in the Haxe compiler itself, as part of the js-gen stuff. Or, create a build tool and compile it to Neko.
My two cents,
And here is a sample Haxe file generated by the tool for Three.js version r49. Sure, it's not perfect, but it's pretty good considering it was created in a single-day hackathon and that the tool generated this file in a single second.
Again, if another dev gives it a few days of proper love, REALLY good stuff can come out of this idea for Haxe.
Looks like a good direction; three.js is a particularly complex source...
Nice idea. It seems to produce a lot of Dynamics, though; wouldn't this make the usefulness of Haxe a little limited?
On 16/05/2012 at 03:18, Huck wrote:
> Hello Haxers,[...]
While I completely agree with the issue itself, I'm not sure an
automated tool is the best solution.
Haxe developers should expect that quality externs are available, and by
"quality" I mean that people took time to read the library documentation
and strictly type the extern API, add the overloads, etc.
I think there are several approaches for that:
- using an automatic tool to extract a first "untyped" version, then working on making it typed
- extracting the typed version directly from the library's HTML documentation; but there is no real documentation standard, and it can break easily if they change the output/engine
For instance, I'm using an automated extraction tool for the Flash API (which is a lot easier since it's strictly typed), but I still have to do some extra manual work to add a bit more strict typing (using enums instead of String where it makes sense) and other things such as @:require version annotations.
In the end, I think what we need is people who have enough time to dedicate to maintaining good-quality externs; I'm not sure we can automate that.
Anyway, thank you for the effort; I'm sure it will help other people.
It might be quite complementary with what Joshua is doing.
Here are some clarifications:
Now, sadly, I don't have any more time to dedicate to this tool (I'm on a short-schedule project), but I wanted to get the ball rolling and demonstrate the large potential it has. This tool could make a huge difference in the Haxe ecosystem, but for that it has to be developed. The code I posted here is only a prototype; it needs a developer assigned for a few days to start from scratch, redesign it correctly using Jint, and build this tool.
I really hope someone can take the ball from here and work on it. And remember: even Google Dart doesn't have this, and it's a huge limiting factor for it as well. Here is a chance to get ahead before CoffeeScript takes it all...
The tool looks very promising.
I have also been working on a similar tool for PHP, which takes peardoc XMLs and generates externs from them.
I also made one before for Java / javadoc, when I was working on HaXe
+ Red5 with Rhino...
I have never released any of them because there are still two quite
big issues I never really managed to solve:
- you'll likely need to manually change some definitions (to type them more strictly, change them, correct them, etc.), and then all of this is deleted when you regenerate the externs... So you need a good "patching" mechanism to have enough flexibility.
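A minimal sketch of such a patching mechanism, assuming a purely hypothetical in-memory format (not BuildHX's actual one): regenerated definitions are merged with a separate, hand-maintained patch file, so manual typing survives the next generation run.

```javascript
// Merge freshly generated definitions with hand-written patches.
// Patches win over generated entries; untouched members keep their
// generated (usually Dynamic) signatures.
function applyPatches(generated, patches) {
  const out = {};
  for (const [cls, members] of Object.entries(generated)) {
    out[cls] = { ...members, ...(patches[cls] || {}) };
  }
  return out;
}

// What the tool regenerates on every run:
const generated = {
  Vector3: { add: "Dynamic->Dynamic", length: "Void->Dynamic" }
};
// What a human maintains once, in a separate file:
const patches = {
  Vector3: { add: "Vector3->Vector3" }
};

const merged = applyPatches(generated, patches);
// merged.Vector3.add was tightened by the patch;
// merged.Vector3.length keeps its generated signature.
```

The key property is that regeneration never touches the patch file, so the manual work is never lost.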
- in PHP there is a quite annoying issue with (anonymous) objects vs. associative arrays: most PHP APIs will use associative arrays, so you have to convert these to anonymous objects at runtime, and the other way round...
I think it may be possible to do this more or less cleanly with
macros, but I have never had the time to really try that.
Anyway, I have had a quick look at Joshua's BuildHx, and it looks promising.
It seems to solve my first issue by providing a nice way to alter generated definitions with a .xml "patch".
Moreover, it seems modular and easy to extend; this was not the case for my projects...
I think we should join forces on this and use a common structure to solve extern generation; BuildHx is a very good start.
I'll try to add PHP support to it when I have some time.
I think you may also want to check out this project and maybe contribute.
Cool, now we're talking :)
Now, BuildHx still seems to require JSDuck tags and XML definitions; these must still be written and maintained, so it's one evil for another. The use of Jint could therefore be the perfect complement to BuildHx: instead of generating .hx externs with Jint, we could generate the XMLs and then merge the changes with existing ones, and afterwards use BuildHx to compile them. The way we do it is irrelevant to me, as long as we do it.
In the end, I agree we should all give some time to this issue, as it will bring more users, and thus a bigger, more lively community; good for us.
Just ideas here. In another design approach, it could be used to generate either JSDuck metadata for BuildHx, or the XMLs themselves.
From what I understand, BuildHx currently uses JSDuck, but one could create their own parser to support other input material.
At least, that's what I was planning to do for PHP/peardoc...
BuildHX was built off an older tool I created which was able to parse
JSDuck output to generate JS extern classes automatically.
I recently improved on the prior tool to open the door for multiple kinds
of parsing, as well as manual definitions in an XML file. When you are
able to parse documents to generate the definitions automatically, you may
still need to tweak or adjust the types by hand. BuildHX is designed to
allow this workflow.
Fintan Boyle recently contributed a YUIDoc parser, so you can use BuildHX
to generate externs for libraries like the EaselJS framework.
You can use BuildHX in cases where you do not have an automated solution
to load the types and definitions for the target library. The "joapp" and
"box2d-native" projects that I have on GitHub are both manually defined.
If you are able to automatically derive types using JInt, you could create
a JIntParser for BuildHX, which could read a source directory and populate
the types it finds there:
<source path="my/js/files" parser="jint" />
Similar to what I have done for Sencha Touch, you could fill in the blanks
manually with the rest of the XML definition file for any given library.
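As a purely hypothetical illustration of filling in the blanks: only the `<source>` element above is taken from the actual example in this thread; the other element and attribute names below are invented, so check the real BuildHX format before relying on them.

```xml
<project>
    <!-- automated pass: populate types from a directory of JS files -->
    <source path="my/js/files" parser="jint" />

    <!-- hypothetical manual override: tighten a type the parser
         could only infer as Dynamic -->
    <class name="Vector3">
        <method name="add" return="Vector3">
            <param name="v" type="Vector3" />
        </method>
    </class>
</project>
```

The shape matters more than the names: one file combines an automated source with the hand-written corrections layered on top.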
I am also investigating Doxygen, which can help export types for C++ libraries. It would be great to have an automated path for generating C++ externs.
BuildHX is not written in C#; it's in Haxe, of course, which makes it easier to run on multiple platforms using Neko.
> I recently improved on the prior tool to open the door for multiple kinds of parsing, as well as manual definitions in an XML file. When you are able to parse documents to generate the definitions automatically, you may still need to tweak or adjust the types by hand. BuildHX is designed to allow this workflow.
That's very cool!
> BuildHX is not written in C#... it's in Haxe, of course, but that makes it easier for it to run on multiple platforms using Neko.
I can try to compile it with the C# generator; it'll be a good test, and we'll be able to use Jint! :)
Cool! Things are getting really interesting now :) I think using Jint to supplement BuildHx is a great idea.
I have trouble finding free time, but at the same time I believe this is a really good cause, so let me know if I can help in any way (I can only give so much time, though).
For one, I got some experience with the output from Jint yesterday, so I can definitely give some advice on how to use it.
First, throw away the code I wrote yesterday; it's a mess and should be considered for instructional purposes only.
It might seem complicated, but it's actually pretty simple; the trick is just to build a proper iterative analyser for the Jint DOM. Let me know if there is anything else I can do to help.
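The kind of iterative analyser described above can be sketched over a generic ESTree-style AST. The node shapes here are assumptions (Jint's actual object model differs); the point is the traversal pattern: recurse over every object-valued field and collect the declarations you care about.

```javascript
// Recursively walk any object-shaped AST, collecting the names of
// top-level and nested function declarations.
function collectFunctions(node, found = []) {
  if (node == null || typeof node !== "object") return found;
  if (node.type === "FunctionDeclaration" && node.id) {
    found.push(node.id.name);
  }
  for (const value of Object.values(node)) {
    if (Array.isArray(value)) {
      value.forEach(child => collectFunctions(child, found));
    } else if (value && typeof value === "object") {
      collectFunctions(value, found);
    }
  }
  return found;
}

// Hand-written mini AST standing in for real parser output:
const program = {
  type: "Program",
  body: [
    { type: "FunctionDeclaration", id: { name: "init" },
      body: { type: "BlockStatement", body: [] } },
    { type: "ExpressionStatement",
      expression: { type: "Literal", value: 1 } }
  ]
};
```

Swapping in the parser's real node type names is then the only Jint-specific work; the walk itself stays the same.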
Since BuildHX is compiled for Neko, it can be distributed and used from haxelib. Ideally, I would like to continue this setup, so I don't think the tool should be compiled as C#.
However, we could create a standalone Jint-based executable and put it in the "/bin" directory along with jsduck and (probably) doxygen. It could accept a directory of JS files, and output type information in JSON or XML files.
These could either be formatted natively for BuildHX, or BuildHX could have a parser to understand it. I would recommend the latter, since that would allow much more verbose type information (dumping almost all the information you can get) which we could then sort through and use. Although there is information that would not be used for now, you never know how things might grow in the future. Having a verbose output format may make it easier to improve later in BuildHX.
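A verbose intermediate dump along those lines might look like the following; the schema is entirely invented for illustration, the idea being to keep everything the parser can see, even fields BuildHX would ignore today.

```javascript
// Hypothetical "dump everything" record for one parsed file.
const dump = {
  file: "three.js",
  classes: [{
    name: "Vector3",
    doc: null,                 // raw doc comment, if the parser found one
    members: [
      { kind: "method",
        name: "add",
        params: [{ name: "v", inferred: null }] } // no type inferred yet
    ]
  }]
};

// Serialized form the Jint-based executable would write out
// for a BuildHX-side parser to consume:
const json = JSON.stringify(dump, null, 2);
```

Because unused fields are simply ignored by the consumer, the format can grow without breaking the BuildHX side.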
Yeah, this is how I would see it too: as an external utility tool invoked from BuildHx. Thanks to Mono, it can run on pretty much any platform Neko can.
Yes, that makes sense. Even though the Neko app.n could just invoke e.g. Mono or the executable directly, it does mean we will have to depend on the user having Mono or the .NET runtime installed.
OK, seems good! I'll leave using the C# target for an actual project later, then! Hahaha