Part I: Evan is already aware of this issue; you can see the full discussion in this GitHub issue: https://github.com/elm-lang/elm-compiler/issues/1528

The short of it is that the code generator part of the compiler is not aware of type information, and adding this would be a serious undertaking. It will probably come at some point, as there are interesting optimizations that can be made, like the one mentioned.
Part II: Bitwise operators are inlined in 0.18, which gave a noticeable performance boost in libraries like Skinney/elm-array-exploration and mgold/random-pcg. Having them available as functions (just like +, -, * etc.) allows more flexibility, like currying.
Elm is, at least currently, a language for making web applications. How often are you going to run calculations in tight loops where the cost of comparison or equality operators has a noticeable impact for the end user in a web browser?
I definitely could have done without the sneering title of this post.
This could, however, feed into the questions over Elm's interface to JavaScript. The easier it is to escape to JavaScript, the easier it is to say that Elm is fast enough for the vast majority of code that matters and if that doesn't cover a tight loop in your code, you can write just that part in JavaScript.
I've spent a long time working in Lua which has a fairly fast implementation as scripting languages go, but also had a very straightforward and powerful escape hatch to C. The former meant that one could expect to write most logic in Lua and not worry about performance. The latter meant that if performance were a problem, it would be easy to address when it came up.
The net from this for Elm is that I would argue that compiler optimizations can focus on easy wins provided that it is easy to move a computation to JavaScript when that doesn't work. At this point, however, that isn't possible because launching a computation through a command and waiting for a result back on a subscription where that result must be distinguished from all other results from other command driven requests is a mess. At the very least, please give us task ports so that we can readily fire off asynchronous computations with results — though for use as an escape hatch when Elm is too slow, I can feel the pressure for a synchronous native code extension mechanism.
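To make the awkwardness concrete, here is a minimal sketch (all names hypothetical, not an actual Elm API) of the JavaScript side of that command/subscription round trip: every request must carry an id, and the single result subscription must echo it back so the Elm update function can distinguish answers from different requests.

```javascript
// Hypothetical port pair: Elm sends {id, params} out through `startWork`
// and receives {id, result} back on the single `workDone` subscription.
function wirePorts(app) {
  app.ports.startWork.subscribe(function (req) {
    // stand-in for an expensive computation done outside Elm
    var result = req.params.reduce(function (a, b) { return a + b; }, 0);
    // the id must be echoed back, or the caller cannot match the answer
    app.ports.workDone.send({ id: req.id, result: result });
  });
}

// Tiny stand-in for the Elm runtime's port objects, for demonstration only.
function makeFakeApp() {
  var handlers = [];
  var received = [];
  return {
    ports: {
      startWork: {
        subscribe: function (f) { handlers.push(f); },
        send: function (v) { handlers.forEach(function (f) { f(v); }); }
      },
      workDone: { send: function (v) { received.push(v); } }
    },
    received: received
  };
}
```

All the id bookkeeping here is exactly the boilerplate that a task port would make unnecessary.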
But we stray from the subject of "Elm as fast as JavaScript"...
My point is that I love the practicality of Elm, with its simple yet functional syntax, and would like never to be forced to use JavaScript or even TypeScript again, but am forced to do so for certain types of applications due to these inefficiencies.
--
I'm not saying that the performance of tight loops isn't important; what I'm questioning is how much comparison and equality operators affect them. I would assume that higher-order functions and immutability have a much bigger impact on tight loops in games, graphics, and intensive math applications.
My point was that the calls for Elm optimization could largely be mitigated through a combination of fast enough for most purposes (arguably already there) coupled with a reasonable escape hatch to the host environment (as of now JavaScript, and not really there) for the cases when it isn't. I'm suggesting that rather than investing in fancy compiler changes, some changes in the runtime architecture to allow performance-intensive central loops to be written in other ways would be cheaper, simpler, and would arguably address some of the other issues that have floated up recently.

Elm has a weird ambivalence about native code. On the one hand, Evan has pushed back pretty hard in the past out of fear of getting tied to crappy JavaScript libraries and has pushed for doing everything in Elm. This is reflected in the relatively small amount of native code based work in the standard Elm package repository. On the other hand, when pushed on how one is supposed to do something that is currently difficult in Elm — e.g., some MDL conventions — others in the community have pushed for using native code rather than trying to write it in Elm.

My recommendation would be a middle path that says we believe most things are better in Elm, but here are its limitations and here is how you can cleanly and easily reach outside of Elm when those limitations prove to be a problem. While the currently documented mechanisms may be clean, they aren't easy for many cases. Ports are extremely awkward for cases that require both a request and a response to that request. Effects managers are undocumented and discouraged. And native modules themselves are barely even admitted to, let alone documented as a thing. Improve that situation and any concerns over both performance and interfacing to the broader world are substantially mitigated, and Elm can better focus on its core mission.
Fail to improve that situation and every effort to grow the market for Elm will also grow the pressure for it to do everything well.
--
Just to make it clear, I'm not particularly calling for an easier way to create "subscriptions". I'm calling for a way to do one of the following — either is fine, each has its pluses and minuses:

1. Expose a synchronous, externally defined function that takes JSON (or really anything a command port supports) in and returns JSON (or really anything a subscription port supports) out.

- or -

2. Expose a factory for externally defined asynchronous tasks, where the task when executed receives JSON (etc.) in and resolves to JSON (etc.) when finished.
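For a feel of option 1, here is a sketch (all names hypothetical) of what the JavaScript side of a synchronous externally defined function could look like: a plain registry of named functions that take a JSON-compatible value in and return one out.

```javascript
// Hypothetical registry for option 1: synchronous, externally defined
// functions that Elm could call directly with port-compatible values.
var externalFns = {};

function registerSyncFn(name, fn) {
  externalFns[name] = fn;
}

// Example registration: a toy stand-in for an expensive native computation.
registerSyncFn('hash', function (s) {
  var h = 0;
  for (var i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) | 0;  // keep the accumulator a 32-bit int
  }
  return h;
});
```

Because the call is synchronous, no request ids or result routing are needed, which is the whole appeal of this option.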
--
You received this message because you are subscribed to the Google Groups "Elm Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elm-discuss+unsubscribe@googlegroups.com.
It looks like the implementation in this StackOverflow answer should still work (ignore the question and its relevance, as it is old): calling a function `getParentPos ()` gets some answers through some JavaScript called through a port, which returns the answers through a subscription `parentPos` that fires a message to update. This is fine as long as there is only one subscription, but as Mark says, there is no Event Manager, so it would likely get confused if there were other subscriptions, such as, say, `Time.every`.

If this worked generally, it would be an easy way to hand off to a long-running program written in JavaScript, with that long-running program able to fire progress reports back and even a different completion Msg, but it would be fairly useless if it didn't work with other subscriptions without adding a lot of code.
So you are saying we can already do the following:

```elm
type Msg
    = ProgProgress Progress
    | ProgCancelled
    | ProgDone Answer
    | Tick

port startLRP : () -> Cmd msg
port cancelLRP : () -> Cmd msg

subscriptions model =
    Sub.batch
        [ lrpProgress ProgProgress
        , lrpCancelled ProgCancelled
        , lrpDone ProgDone
        , Time.every 1000 Tick
        ]

port lrpProgress : (Progress -> Msg) -> Sub Msg
port lrpCancelled : (() -> Msg) -> Sub Msg
port lrpDone : (Answer -> Msg) -> Sub Msg
```

with the Native JavaScript as per the link for each of the ports, all the types defined, and the update function handling all Msg cases, and the subscription system will automatically handle getting each subscription to the right Msg? Will this not cause problems with the Time.every subscription?
As for doing pure but computationally intensive work in a Task, you can do it in Elm:
Task.succeed () |> Task.andThen (\_ -> doHardStuff)
This won't work for inner render loops, and the asynchronous Elm will still be slower than JS, but it may be useful in some circumstances.
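For comparison, the JavaScript analogue of that deferral (helper name mine): the hard work is pushed onto a later microtask rather than run during the current call, which is essentially what chaining off `Task.succeed ()` buys you.

```javascript
// Defer a pure computation so it runs asynchronously, after the current
// call stack unwinds -- analogous in spirit to
// Task.succeed () |> Task.andThen (\_ -> doHardStuff)
function deferHardWork(doHardStuff) {
  return Promise.resolve().then(doHardStuff);
}
```

Note that the work is merely deferred, not parallelized: once it starts, it still blocks the main thread until it finishes.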
Task.succeed () |> Task.andThen (\_ -> doHardStuff)
runSimulation : SimulationParams -> Task SimulationError SimulationValue
runSimulation : (Result SimulationError SimulationValue -> msg) -> SimulationParams -> Cmd msg
If we can do all of that, I don't see what Mark is worried about? We don't have to have an Event Manager? What's the point of a Router/Event Manager? Ah, I think it's to do with queueing messages to process later, whereas this will process them immediately?

If this all works, we could write LRPs in JavaScript and optionally move them to Elm when it gets more efficient.
--
Even if this method works, in the interim it means I have to produce JavaScript code, which I understood the purpose of Elm was to avoid. If one still has to write large parts of the code in JavaScript for this particular application, one then starts to ask why not use TypeScript directly (C#'ish syntax) or Kotlin with the JavaScript back end (more concise, can be very functional in use, and at quite a bit more advanced stage of development than Elm). For me, these are the implied questions this thread raises.
Related to speed, it seems to me that the working Fable (F#) code linked from your link above is more responsive than the working Elm code from another link on that page, both for the same sample application; am I imagining things or do you see that too?
I used ghc 7.10, which doesn’t work out of the box with ghcjs. I cloned ghcjs master and used these instructions: https://github.com/ghcjs/ghcjs/wiki/GHCJS-with-GHC-7.10 from the wiki to install it properly. The quick start on the ghcjs readme doesn’t quite mention everything. In particular, you must bootstrap like:
ghcjs-boot --dev --ghcjs-boot-dev-branch master --shims-dev-branch master
--
Synopsis: Some, including Evan, maintain that Elm can be "faster than JavaScript". While that may be true for some use cases, including perhaps more efficient UI updates due to compartmentalized use of the VirtualDOM, the actual JavaScript code generated by the compiler is not very efficient for many/most tight-loop cases. The reason is often unnecessary nested function calls that could easily be eliminated by making full use of the type information that the Elm compiler has.

Part I

An example: the following tight loop doesn't really do anything, so it should compile into the very tightest of code (and I'm not expecting the Elm compiler to recognize that the result is actually known at compile time):

```elm
range : Int
range = 1000000000

testProg : Int -> Int
testProg n = -- do some work
    let lmt = min (n + 100000000) range in
    let loop i =
            if i >= lmt then i
            else loop (i + 1)
    in loop n
```

which compiles to the following JavaScript:

```js
var _user$project$Temp1482759649866537$range = 1000000000;
var _user$project$Temp1482759649866537$testProg = function (n) {
    var lmt = A2(_elm_lang$core$Basics$min, n + 100000000,
                 _user$project$Temp1482759649866537$range);
    var loop = function (i) {
        loop:
        while (true) {
            if (_elm_lang$core$Native_Utils.cmp(i, lmt) > -1) {
                return i;
            } else {
                var _v0 = i + 1;
                i = _v0;
                continue loop;
            }
        }
    };
    return loop(n);
};
```

All right, the code looks fairly good, and we can see that for the inner `loop` function the compiler used its new capability to do tail-call optimization and turn it into a while loop. Also, one might expect that any decent JIT compiler, such as Chrome's V8, will use constant folding and get rid of the `_v0` variable. However, the real limitation of this loop is the call to the `Native_Utils.cmp` function. Function calls are expensive, at tens of CPU clock cycles each!

The pertinent JavaScript for `Native_Utils.cmp` is as follows:

```js
var LT = -1, EQ = 0, GT = 1;

function cmp(x, y)
{
    if (typeof x !== 'object')
    {
        return x === y ? EQ : x < y ? LT : GT;
    }
    ...
```

Note that there are three native branches here (in addition to the enclosing one): one to check whether the arguments are objects (which of course they are not in the case of the Int's here, or of Float's, as the compiler well knows), one to check whether they are equal, and (given that generally they won't be equal most of the time) one to see which is greater. Now, these conditions are not so bad by themselves, as they are very predictable for modern CPUs' branch prediction (`i` is almost always < `lmt`), so they will cost at most a few CPU cycles; however, the call to the function in the first place will cost tens of CPU clock cycles!

Given that the compiler already knows and strictly enforces that the arguments are both Int's or both Float's (which are both just Numbers in JavaScript), there is no reason that it cannot directly output `(i >= lmt)` instead of the function call and make the whole inner loop take only a few CPU clock cycles (on a JavaScript engine such as Chrome's V8). If the compiler were consistent in applying this specialization rule, there would be no need for the `Native_Utils.cmp` function to check whether the arguments are objects, but for safety's sake, and considering that one extra check in the case of objects is likely negligible compared to the object processing, it may as well be left in for its true best use case of comparing objects of the various kinds.

The Elm compiler only deals with two primitive numeric types, Int and Float (which are both actual Number/Floats to JavaScript), which makes direct use of primitive operands very easy.

Part II

In a similar way, the definition of the Bitwise library to emulate Haskell's Data.Bits library was silly for only five Int functions, made even worse by a name collision with the Bool `xor` operator.
Because these are library functions, there is at least one level of function call for every use. Just as for the above function calls on known primitive types (in this case only Int), these functions should be added to the Basics library as operators with appropriate infix levels and with the names `&&&`, `|||`, `^^^`, `<<<`, and `>>>`, just as for F# (which Elm emulates more and more), with the Elm compiler directly substituting the equivalent primitive JavaScript operators. The Bitwise library, which should never have existed, should be canned.

Note that although this will make bitwise operators much faster than they are currently, programmers should be aware that there is a lot happening under the covers that may keep these operations from being as efficient as possible alternate means: the arguments are truncated to 32-bit values, the operation is carried out, and then the result is sign-extended to convert back to a Number/Float, although for a sequence of bitwise operations the JavaScript engines may only do the conversions at the beginning and the end of the sequence. Also, using bits in place of Bool's does save space, but not as much as perhaps thought: 32 bits are stored in a 64-bit Float, which may not save much space compared to Bool's, especially as some JavaScript engines automatically do that specialization. In other words, potential time and space savings must be tested. But in any case, these operations may as well be as efficient as possible, and the above changes should be made.
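The Part I specialization being argued for can be sketched in plain JavaScript: the same loop written with a generic `cmp` call, as Elm currently generates, and with the comparison inlined, as the compiler could emit once it uses the type information (the loop function names are mine, and `cmp` is simplified from `Native_Utils.cmp`).

```javascript
// Simplified Native_Utils.cmp: one call plus up to three branches per compare.
function cmp(x, y) {
  if (typeof x !== 'object') {
    return x === y ? 0 : x < y ? -1 : 1;
  }
  throw new Error('object comparison elided in this sketch');
}

// What the compiler emits today (modulo names): every iteration pays a call.
function loopWithCmp(n, lmt) {
  var i = n;
  while (cmp(i, lmt) < 0) { i = i + 1; }
  return i;
}

// What it could emit for arguments known to be Int: a direct primitive compare.
function loopInlined(n, lmt) {
  var i = n;
  while (i < lmt) { i = i + 1; }
  return i;
}
```

Both return the same result; the only difference is whether the comparison goes through a function call on every iteration.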
On Saturday, 31 December 2016 03:25:53 UTC+7, art yerkes wrote:

The promise of never needing to really get dirty and debug live Elm code is real, and it's true that nothing else quite lives up to that.
Amen to that: the advantage of being simple and placing restrictions on what one can do such that there is really only one way to do things, which can be carefully tested to not cause crashes or weird behaviour...
I wrote some notes on getting ghcjs up and running earlier. The basic trick is to ignore the initial README documentation and look on the wiki, following the directions for your specific Haskell version. For 7.10:

I used ghc 7.10, which doesn’t work out of the box with ghcjs. I cloned ghcjs master and used these instructions: https://github.com/ghcjs/ghcjs/wiki/GHCJS-with-GHC-7.10 from the wiki to install it properly. The quick start on the ghcjs readme doesn’t quite mention everything. In particular, you must bootstrap like:
ghcjs-boot --dev --ghcjs-boot-dev-branch master --shims-dev-branch master
Just for fun, I pasted the same code into the BuckleScript playground. The generated code is below:

```js
function testProg(n) {
  var lmt = Pervasives.min(n + 100000000 | 0, 1000000000);
  var _i = n;
  while(true) {
    var i = _i;
    if (i >= lmt) {
      return i;
    }
    else {
      _i = i + 1 | 0;
      continue ;
    }
  };
}
```

Note that the BuckleScript compiler is able to do type specialization for generic comparison; it also heavily optimizes the curried calling convention.
--
That leaves me impressed with the ease of use of Elm, with my biggest wish-list items all concerned with making Elm faster; but these are still early days for the language, as long as it is still evolving.
On Wed, Dec 28, 2016 at 3:10 PM, GordonBGood <gordo...@gmail.com> wrote:

So you are saying we can already do the following:

```elm
type Msg
    = ProgProgress Progress
    | ProgCancelled
    | ProgDone Answer
    | Tick

port startLRP : () -> Cmd msg
port cancelLRP : () -> Cmd msg

subscriptions model =
    Sub.batch
        [ lrpProgress ProgProgress
        , lrpCancelled ProgCancelled
        , lrpDone ProgDone
        , Time.every 1000 Tick
        ]

port lrpProgress : (Progress -> Msg) -> Sub Msg
port lrpCancelled : (() -> Msg) -> Sub Msg
port lrpDone : (Answer -> Msg) -> Sub Msg
```

with the Native JavaScript as per the link for each of the ports, all the types defined, and the update function handling all Msg cases, and the subscription system will automatically handle getting each subscription to the right Msg? Will this not cause problems with the Time.every subscription?
This looks like valid code to me.
I would implement it differently, in a way where the model captures the current state of the process and the subscription uses that state to listen only to the relevant Subs but... that's more of an optimization.
Also, I assume that Answer and Progress are aliases to types that can travel the ports.
What kind of problems do you foresee with Time.every?
--
Note that a high-quality optimizing compiler like BuckleScript significantly outperforms native JS, which in turn outperforms Elm.
(See the benchmark here: http://bloomberg.github.io/bucklescript/slides/preview/bb-reveal-example-presentation.html#/5/1)
The only reason that I suggest these things is that I agree with Evan that JavaScript would ideally only be used as necessary in libraries, with the majority of applications never having to deal with it; however, with the current version there seems to be various applications where Elm code is not performant enough.
--
Hi Gordon,
Thanks for your lengthy reply.
I didn't try to convince you that OCaml is better than Haskell; they are different styles : ). It just feels a little weird that "when you want performance, you should switch from a statically typed language (Elm here) to a dynamically typed language (JS)"; a decent compiler should produce much more efficient JS output than hand-written JS. Note that I have been working on the BuckleScript compiler for only one year; there are still many low-hanging fruits, and you can expect even better performance in the near future.
Some minor corrections to your comment
- I am quite familiar with Haskell, OCaml, and F# (I did 3 years of research in PL; I learned F# first, Haskell later, and OCaml last). The expressivity of their type systems, in my opinion, ranks as below:
Haskell ~ OCaml > F# >> Elm
(OCaml has a fairly advanced object system with structural typing and row polymorphism which is incredibly useful to build FFI to JS objects)
Yes, I can see OCaml's advantages; it's just that if I were looking for maximum flexibility of the type system I would prefer Haskell, but that isn't really a big concern for the kinds of things I would use Elm for, for which either type system is adequate. This is why I could accept Fable over BuckleScript, if it were as well developed as to ease of use, speed, and stability, even though it is not as expressive: just because I know it better and like its relative simplicity.
- If syntax matters a lot, you may be interested in ReasonML; Facebook is working on a new syntax for OCaml, and it works seamlessly with BuckleScript. Some core ReactJS developers are also working on high-quality ReactJS bindings.
Happy New Year, and enjoy your favorite language!
I see that BuckleScript would work fine for JavaScript output and OCaml can be fast, but wouldn't int64 with two int's be a bit slow? It's just that I prefer Haskell's syntax and capabilities over OCaml's, as it just feels like a more modern language. I do like F# (and thus probably Fable), but it isn't as pure a language as Haskell. I think I'll see what GHCJS can do for me, once I can get it installed.
On Saturday, December 31, 2016 at 8:09:28 PM UTC-7, GordonBGood wrote:

I see that BuckleScript would work fine for JavaScript output and OCaml can be fast, but wouldn't int64 with two int's be a bit slow? It's just that I prefer Haskell's syntax and capabilities over OCaml's, as it just feels like a more modern language. I do like F# (and thus probably Fable), but it isn't as pure a language as Haskell. I think I'll see what GHCJS can do for me, once I can get it installed.

Unless you know any other way of representing int64 in JavaScript? An array of two integers is about the best you can do. Using a native JavaScript integer you get 32 bits. Using a native JavaScript number you get a 64-bit float (53 bits, if I recall correctly, of usable integer).
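A sketch of the pair representation being discussed (helper name mine): the 64-bit value is held as `[high32, low32]` and addition carries between the halves, which is why int64 emulation costs several primitive operations per arithmetic op.

```javascript
// 64-bit integer as [high32, low32], both halves treated as unsigned 32-bit.
function int64Add(a, b) {
  var lo = (a[1] >>> 0) + (b[1] >>> 0);   // may exceed 32 bits temporarily
  var carry = lo > 0xFFFFFFFF ? 1 : 0;
  var hi = (a[0] + b[0] + carry) >>> 0;   // wrap the high half to 32 bits
  return [hi, lo >>> 0];
}
```

The motivation is the precision cliff of Number: above 2^53, `x + 1 === x` can hold, so a single Float cannot represent a full 64-bit integer.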
Also, OCaml and Haskell are about the same age (although OCaml is based on the older SML), but as for the 'feel' of it, there are two things to note:

1. OCaml's syntax is designed for fast parsing; the code that Bob Zhang gave above compiles in 0.015 seconds on my machine here. Even very complex programs compile in seconds at most, compared to my 'usual' Haskell programs taking multiple minutes (or potentially hours on more complex programs that use a lot of HKTs). Nearly every decision of OCaml's syntax was designed to make for a *very* fast compiler (and Elm was modeled as a mix of OCaml and Haskell syntax; see https://github.com/OvermindDL1/bucklescript-testing/blob/master/src/main_counter.ml for a working Elm example in OCaml).
2. There is a PPX (essentially a preprocessor, but of the AST) called ReasonML, which is OCaml with a fluffed-up, more javascript'y (ew) syntax that many like if you want something more modern feeling, but it is still just OCaml.
--
```ocaml
let soe_loop top =
  if top < 3 then Array.make 0 0 else
  let ndxlmt = (top - 3) lsr 1 in
  let cmpsts = Array.make ((ndxlmt lsr 5) + 1) 0 in
  for _loop = 1 to 1000 do (* do it many times for timing *)
    let pir = ref 0 in
    while !pir <= ndxlmt do
      let pi = !pir in
      let p = pi + pi + 3 in
      let rec nxtc ci =
        if ci > ndxlmt then () else begin
          let w = ci lsr 5 in
          cmpsts.(w) <- cmpsts.(w) lor (1 lsl (ci land 31));
          nxtc (ci + p)
        end in
      let si = (p * p - 3) lsr 1 in
      if si > ndxlmt then pir := ndxlmt + 1
      else begin
        if cmpsts.(pi lsr 5) land (1 lsl (pi land 31)) == 0 then
          nxtc si;
        pir := pi + 1
      end
    done
  done;
  cmpsts
```
```js
function soe_loop(top) {
  if (top < 3) {
    return Caml_array.caml_make_vect(0, 0);
  }
  else {
    var ndxlmt = ((top - 3 | 0) >>> 1);
    var cmpsts = Caml_array.caml_make_vect((ndxlmt >>> 5) + 1 | 0, 0);
    for(var loop = 1; loop <= 1000; ++loop){
      var pir = 0;
      while(pir <= ndxlmt) {
        var pi = pir;
        var p = (pi + pi | 0) + 3 | 0;
        var nxtc = (function(p){
          return function nxtc(_ci) {
            while(true) {
              var ci = _ci;
              if (ci > ndxlmt) {
                return /* () */0;
              }
              else {
                var w = (ci >>> 5);
                cmpsts[w] = cmpsts[w] | (1 << (ci & 31));
                _ci = ci + p | 0;
                continue ;
              }
            };
          }
        }(p));
        var si = ((Caml_int32.imul(p, p) - 3 | 0) >>> 1);
        if (si > ndxlmt) {
          pir = ndxlmt + 1 | 0;
        }
        else {
          if (!(cmpsts[(pi >>> 5)] & (1 << (pi & 31)))) {
            nxtc(si);
          }
          pir = pi + 1 | 0;
        }
      };
    }
    return cmpsts;
  }
}
```
--
Hi Gordon,
As you can see, BuckleScript already did a very good job here; of course, there are still places for improvement. To write extremely high performance code, you should avoid creating a closure in a tight loop; here you can lift `nxtc` to the top level. In the future, we will do this inside the compiler. Let's take further discussions off the list -- Hongbo
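The hoisting being suggested can be sketched as follows (function name mine): instead of defining `nxtc` inside the while loop, which allocates a fresh closure on every outer iteration, the marking function is lifted to the top level and takes everything it needs as arguments.

```javascript
// Lifted out of the loop: no closure is allocated per outer iteration.
function markMultiples(cmpsts, ndxlmt, start, p) {
  for (var ci = start; ci <= ndxlmt; ci += p) {
    var w = ci >>> 5;
    cmpsts[w] = cmpsts[w] | (1 << (ci & 31));  // set bit ci in the bit array
  }
}
```

The inner loop body is unchanged; only the allocation per iteration of the outer loop disappears.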
--
Indeed, in the OCaml native backend, `for` loops still dominate performance-critical code, since most optimizations do not work across function boundaries. There is still a long way for an optimizing compiler to catch up with carefully tuned code, but BuckleScript does not get in your way: you can still write low-level code, with type-safety guarantees, in it when performance matters.
--
Hi Gordon, thanks for your summary.

Just want to add that the BuckleScript compiler has only been developed for one year, and now I work on it almost full time (thanks to my employer), so you can expect more performance boosts coming soon.
Personally, I don't mind any syntax; I am a huge fan of Common Lisp, so you can tell. But I understand syntax does matter to quite a lot of people, so you may be interested in checking out ReasonML by Facebook (a more familiar syntax for OCaml). OCaml is a very modular compiler which has 7 IRs; ReasonML compiles to IR 0, while BuckleScript takes it from IR 4, so the combination of ReasonML and BuckleScript is seamless.
On your advice, I've reread the ReasonML documents, as I got misled by looking at ReasonML/Rebel, which is at too early a stage for me to consider (also, its support for BuckleScript is currently broken). I like the ReasonML syntax much better than OCaml's, as it is more consistent; although it still has curly brackets and semicolons instead of whitespace-delimited blocks, I can live with that, as I have before, as long as it is consistent. Unfortunately, I can't get the ReasonProject to install on my machine (Windows 10 64-bit) and I am not likely to pursue it at this time, as it isn't that important to me.
I understand that if I were able to install it, by "working seamlessly with BuckleScript", you mean that the "bsb" command will just take the output of the previous ReasonML build as its input to produce JavaScript?
BTW, I see one of the reasons that BuckleScript produces JavaScript that is so much faster than that produced by Elm: you use JavaScript Arrays as the base for almost all of the data structures, whereas Elm uses tagged objects, which are very slow (at least on Google Chrome V8/nodejs); it almost seems like the Elm compiler just outputs dynamically typed code with the "all data is just one type - an object with tags" model, although it does recognize that numbers, characters, and strings will be picked up by JavaScript and don't need to be wrapped in a tagged wrapper object.
It's too bad you can't apply your compiler optimizations to Elm. Perhaps they could be, as Elm does produce an AST, and it is a very simple language, so presumably the AST is fairly simple too.
As an aside, have you ever looked at using an asm.js back end, as current mainline browsers mostly support it, and if so, did it make any difference to speed?
--
On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote:

On your advice, I've reread the ReasonML documents, as I got misled by looking at ReasonML/Rebel, which is at too early a stage for me to consider (also, its support for BuckleScript is currently broken). I like the ReasonML syntax much better than OCaml's, as it is more consistent; although it still has curly brackets and semicolons instead of whitespace-delimited blocks, I can live with that, as I have before, as long as it is consistent. Unfortunately, I can't get the ReasonProject to install on my machine (Windows 10 64-bit) and I am not likely to pursue it at this time, as it isn't that important to me.

I'm actually not a fan of the Reason syntax myself; I consider it horribly verbose and noisy, but then again I've been comfortable with OCaml/SML syntax for over a decade now. It is correct, however, that Reason has no Windows support 'yet' (it is on their roadmap), although BuckleScript works perfectly with its HEAD right now elsewhere (most of the Windows issues are, oddly enough, because of npm stupidity actually). BuckleScript itself supports Windows absolutely perfectly though (I was noisy about it at first, when it did not ^.^).
On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote:

BTW, I see one of the reasons that BuckleScript produces JavaScript that is so much faster than that produced by Elm: you use JavaScript Arrays as the base for almost all of the data structures, whereas Elm uses tagged objects, which are very slow (at least on Google Chrome V8/nodejs); it almost seems like the Elm compiler just outputs dynamically typed code with the "all data is just one type - an object with tags" model, although it does recognize that numbers, characters, and strings will be picked up by JavaScript and don't need to be wrapped in a tagged wrapper object.
That is one reason, but not the only one. A bigger reason is that BuckleScript 'types' the JavaScript code, such as by putting `| 0` on integer operations, which allows most JavaScript VMs (V8, and especially Firefox's) to JIT them significantly more efficiently. There is more 'typing' of this kind that it could do, which I expect it to gain over time.
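To make the `| 0` point concrete, here is a rough sketch (function names are hypothetical, and this is not BuckleScript's literal output) of how the annotation pins values to 32-bit integers, which lets the JIT keep them unboxed rather than handling the general "number or anything else" case:

```javascript
// Without annotations, the engine must be ready for ints, floats,
// strings, objects... and may deoptimize when types vary.
function sumUntyped(a, b) {
  return a + b;
}

// asm.js-style annotations: `| 0` truncates to int32, so both the
// inputs and the result are known to be 32-bit integers.
function sumTyped(a, b) {
  a = a | 0;
  b = b | 0;
  return (a + b) | 0;
}

console.log(sumTyped(3, 4));          // 7
console.log(sumTyped(2147483647, 1)); // wraps to -2147483648 (int32 overflow)
```

Note the trade-off visible in the last line: the `| 0` coercion gives the VM a type guarantee, but it also imposes int32 wrap-around semantics, which is exactly what an ML-style `int` maps to.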
On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote: It's too bad you can't apply your compiler optimizations to Elm. Perhaps they could be applied, as Elm does produce an AST, and since it is a very simple language, presumably the AST is fairly simple too.

BuckleScript does not do most of the optimizations itself (well, it does some JavaScript-specific ones, like higher-arity functions and so forth); most of them are OCaml's. *However*, you could write an Elm compiler that compiles to OCaml (which could then be compiled to native code, or to JavaScript via BuckleScript), but the syntaxes are so close as it is that it would be easier to just write an Elm-like library for OCaml instead.
On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote: As an aside, have you ever looked at using an asm.js back end, as current mainline browsers mostly support it, and if so, did it make any difference to speed?

Actually, that is what the typing stuff I mentioned above is: the asm.js decorations (though not full asm.js; things like `| 0` are pretty universally supported, plus it is all backwards compatible). I could imagine OCaml compiling directly to WebAssembly later anyway, but for working with JavaScript, being able to see the (very readable) output JavaScript via BuckleScript is unmatched in usefulness.
That is actually why I think Elm should compile to a different back end, like OCaml/BuckleScript or so. The syntax is uniform enough that making an elm->ocaml/bucklescript transpiler would be just a matter of re-using most of the existing parser in OCaml, which is already beyond blazing fast by comparison. It would significantly reduce Elm's compile time, it would put Elm on a back end that has far more optimization passes than Elm itself does while being substantially better tested, and it would give a way to compile Elm to bare metal for very fast server-side page generation.
All that this would take would be to write an Elm parser as the first stage of the OCaml pipeline? You'd also need to compile the Native modules; is there already some way to feed them into the OCaml pipeline?
--
You received this message because you are subscribed to the Google Groups "Elm Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elm-discuss+unsubscribe@googlegroups.com.
Even a direct Elm-to-OCaml translation wouldn't be too hard. Elm is not the same as OCaml, but my understanding is that most of Elm's features are included in OCaml (row polymorphism, strict evaluation, first-class functions). OCaml has lots of features Elm doesn't want (like mutable references), but that's not a problem, and could even allow for some nice back-end optimizations.

This would also provide a really nice way to do Elm on the backend. The big question is: how to write such a translator? Are there Haskell libraries for generating OCaml? Or would the compiler need to be written in OCaml?
But yes, translating Elm to OCaml/Bucklescript would not at all be a hard task. :-)
I'm wondering why the Elm compiler is so slow at parsing, if that is where the slow-down is.
Keep in mind that code is the easy part; the major thing standing between Elm and a different compilation target than JavaScript is 1-2 years of design work to figure out a quality user experience.
I'm wondering why the Elm compiler is so slow at parsing, if that is where the slow-down is

Evan recently rewrote the parser to be much faster. You can try a preview binary of the new version if you're curious. :)
I'm wondering why the Elm compiler is so slow at parsing, if that is where the slow-down is

Evan recently rewrote the parser to be much faster. You can try a preview binary of the new version if you're curious. :)

I saw that over on elm-dev, but haven't tried it because compilation speed isn't a problem for the Elm code I have written so far. The only reason I brought it up is OvermindDL1's comment that compiling some OCaml/BuckleScript code (that presumably did the same thing as the Elm code) took about 0.1 seconds, as compared to 40 seconds with the Elm compiler - a 400 times speed-up! We weren't given details of the code or test conditions, or whether one was an incremental compilation, but that sounds quite serious and would affect the usability of Elm. If that data is verifiable, a speed-up of double or even quadruple doesn't begin to touch the difference, and it should be investigated.
I would be very surprised if parsing is the bottleneck.
In most cases, type checking and register allocation (which can be quadratic) take much more time. OCaml's type checking algorithm is very clever: almost linear in most practical use cases.
Hi Gordon,
It is not uncommon to see a 100-times slowdown when building Elm vs OCaml in an incremental build (dev time). The reason is that Elm (correct me if I am wrong) always needs a link step, so whenever you change a file, it triggers the linker; this gets significantly worse if your project is not small. BuckleScript, by contrast, compiles one OCaml module to one ES6 module, so it does not need linking during dev time, which is best for incremental builds.
You need to recompile that module and regenerate a monolithic JS file; the larger the project gets, the worse the compilation time in Elm's model. If you have experience in C++, you know the bottleneck used to be linking; it is quite common for a C++ project to spend 20 minutes in linking.
All that this would take would be to write an Elm parser as the first stage of the OCaml pipeline? You'd also need to compile the Native modules; is there already some way to feed them into the OCaml pipeline?

Elm "Native" libraries are JavaScript, and that is what BuckleScript handles: as well as outputting JS code, it also has a JS FFI to allow BuckleScript code to call to/receive calls from JS code. I think this would be handled by an Elm PP (preprocessor) just as OCaml handles FFI references - leaving FFI blanks to be "filled in" by later passes.
On Monday, 16 January 2017 21:09:43 UTC+7, Rupert Smith wrote:

On Saturday, January 14, 2017 at 1:51:39 AM UTC, GordonBGood wrote: All that this would take would be to write an Elm parser as the first stage of the OCaml pipeline? You'd also need to compile the Native modules; is there already some way to feed them into the OCaml pipeline?

Elm "Native" libraries are JavaScript, and that is what BuckleScript handles: as well as outputting JS code, it also has a JS FFI to allow BuckleScript code to call to/receive calls from JS code. I think this would be handled by an Elm PP just as OCaml handles FFI references - leaving FFI blanks to be "filled in" by later passes.

Yes, that is what I was getting at. If you compile Elm -> OCaml -> JavaScript, then the Native stuff in Elm just gets copied into the output JavaScript directly (unless, of course, passing it through OCaml is worthwhile and can optimize it). But if you are going Elm -> OCaml -> native x86_64 binary, then the JavaScript needs to be compiled through OCaml. So I was just wondering: does the OCaml tool-chain already have a JavaScript front end?
Writing a JavaScript front end for OCaml would be an ambitious undertaking, because JavaScript does not resemble OCaml in the least, and there wouldn't seem to be much point.
I saw that over on elm-dev, but haven't tried it because compilation speed isn't a problem for the Elm code I have written so far. The only reason I brought it up is OvermindDL1's comment that compiling some OCaml/BuckleScript code (that presumably did the same thing as the Elm code) took about 0.1 seconds, as compared to 40 seconds with the Elm compiler - a 400 times speed-up! We weren't given details of the code or test conditions, or whether one was an incremental compilation, but that sounds quite serious and would affect the usability of Elm. If that data is verifiable, a speed-up of double or even quadruple doesn't begin to touch the difference, and it should be investigated.
If only there were a binary posted somewhere, based on a compiler that had just been rewritten to improve build times, so that someone could post a benchmark instead of speculation! ;)
You need to recompile that module and regenerate a monolithic JS file; the larger the project gets, the worse the compilation time in Elm's model. If you have experience in C++, you know the bottleneck used to be linking; it is quite common for a C++ project to spend 20 minutes in linking.

Considering Evan is working on asset management for the next release, I doubt "compile everything to one file" will still be true after it lands. (That release is presumably still several months away; this is just speculation on my part.)
Yes, that is what I thought. I probably missed some context out when quoting, but my question was in response to OvermindDL1's suggestion that moving to OCaml would open up the possibility of compiling to back ends other than JavaScript.

An alternative might be to re-write the Native modules in the Elm core in OCaml. There isn't a huge amount of it.
On Tuesday, January 17, 2017 at 4:25:08 AM UTC-7, Rupert Smith wrote: An alternative might be to re-write the Native modules in the Elm core in OCaml. There isn't a huge amount of it.

Precisely this. I've already done quite a large chunk of it as a test, and it translates very easily (and becomes type safe, which Elm's is not: I've hit 'undefined's in pure Elm code before - already in the bug tracker, but why did they happen at all?!). I kept it identical to the Elm API as well, though if I broke Elm's API in a couple of minor ways I could *substantially* reduce the number of allocations done... But yes, I've rewritten most of Elm's native Core code as well as some third-party libraries like navigation, all without a touch of JavaScript and all of it type safe the whole way. I was mostly playing around, but we ended up using a lot of it at work anyway (I still need to get around to cleaning it up and releasing it...).
In my view, this also provides very good justification for not allowing native code into packages.elm-lang.org. Porting Elm to another platform in this way is manageable.
I think everyone hangs around this and other Elm discussion forums because we love the Elm concept of a simple functional language that promises to eliminate the need to deal with JavaScript; but unfortunately, due to the compiler shortcomings, that promise is not delivered other than for the most basic of programs. I'm glad to see that Evan resists adding everyone's favourite feature to the language and actually continues to reduce syntax to a bare core of functionality.
Ideally, the Elm compiler would get completely re-written, both to deal with the compilation speed issues (hopefully the work on "asset management" will handle that) and to use the available static type information in the back end to generate much more efficient JS code, as BuckleScript does (in most cases). The latter will be an even larger project, as in order to get real JS speed improvements in some cases, the memory model will have to be completely changed to something like (or exactly) that of the BuckleScript back end - text-tagged JS records as the primary data structure won't do. So (as Evan said) this is a big project: changes will have to be made to pass the type information to the code-generator back end ***and*** the back end will have to be completely re-written to use that type information; while we are at it, we may as well change the JS memory model to use JS Arrays (no text tags) as the primary data structure, as BuckleScript does. This may make the resulting JS code less debuggable, but that isn't why we want to use Elm - we hope that all debugging can be done within the Elm environment.
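For reference, the two memory models under discussion can be sketched side by side (shapes only; the names here are illustrative, roughly modelled on Elm 0.18's `ctor`-tagged output and BuckleScript's array cells, not literal compiler output):

```javascript
// Tagged-object style, roughly what Elm 0.18 emits for a list cell:
// a string tag plus named numbered fields.
var elmStyleCons = { ctor: '::', _0: 1, _1: { ctor: '[]' } };

// Array style, roughly what BuckleScript emits: fields live in
// numbered slots, and the tag is implicit or a small integer.
var bsStyleCons = [1, 0]; // [head, tail]; 0 stands for the empty list

// Reading the head of each:
var headA = elmStyleCons._0; // property lookup on a tagged object
var headB = bsStyleCons[0];  // indexed access on a dense array
```

The array form avoids storing a string tag per value and gives the VM a dense, uniformly shaped allocation, which is the basis of the "no text tags" argument above; whether property lookup is actually slower than indexed access depends on the engine, as noted later in the thread.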
Unfortunately and realistically, there seems to be only one major contributor to the Elm compiler and build system - Evan himself - and he is under increasing pressure to deliver more timely updates in a variety of areas, not only code efficiency. Also, the plan proposed above requires changes in at least two major parts of the compiler: the AST builder and the back-end code generator, so either one person needs to do both or there will be co-ordination involved. This work would have to precede any other work on further compiler optimization passes a la BuckleScript.
As you say, the easiest thing to do would be to just write a stand-alone Elm2OCaml program, which could then easily become the "pp" front end of an alternative Elm compiler producing much more efficient JS code through BuckleScript, with even more BuckleScript optimizations promised in the near future (or a native-code alternative back end). Again, as you say, it is very easy to write minimal JS interfaces in OCaml, so there would then be almost no need for Native code modules at all.
Unfortunately, if we do that in as short a time as you say is possible, work on the Elm compiler will likely never catch up to that effort, and Elm, the language, will become nothing but a language specification for an alternate front end to OCaml, just as ReasonML is. In a way, I'd be sorry to see that happen, as Elm could be an independent language force in its own right. Once Elm's core Native libraries have been re-written into the OCaml environment, the ease of use of the resulting combination will likely mean that most serious users will choose that development environment, which then splits development efforts - which was the cause of the (at least near) death of many other capable languages (D comes to mind).
Perhaps this is the best alternative, as then Evan and other major contributors could concentrate on refining the language spec without the drain on their limited time of also working on the implementation of the language environment. If we want to prevent this, we need more contributors to Evan's work on compiler upgrades, if that is possible, rather than an Elm front end for OCaml.
It seems to me that the old rule of not optimizing early doesn't apply to compilers, at least as regards the choice of memory model for the code generator and the assumption that type information is not essential for efficient (and reliable) back-end code. Having to rectify those omissions now is a lot of work!

In fact, Fable is going through the same output-code-efficiency problems, made worse because its goal is to support the full, more-complex-than-Elm F# language specification: its back-end memory model is very similar to Elm's, and the resulting code is up to about six or seven times slower than that produced from the same algorithms by BuckleScript - much as Elm's output code is; Fable also seems slow to compile. One problem that both Fable and Elm must address is a consistent way to handle argument currying. Fable does this by special-casing uses, where some kinds of functions are always curried (with an execution-time overhead) and others are not; Elm handles it by always pseudo-currying, using hidden wrapper JS functions to apply wrapped functions to fixed numbers of arguments at a time as determined by the program context, but again at a cost in performance (although perhaps less than Fable's more direct multi-level currying).
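Elm's "hidden wrapper" approach is the arity-helper pattern; the real runtime helpers in Elm 0.18's output are named F2, A2, and so on, but the sketch below is a reduced illustration rather than the actual runtime source:

```javascript
// F2 wraps a two-argument function so it can still be partially applied.
function F2(fun) {
  function wrapper(a) { return function (b) { return fun(a, b); }; }
  wrapper.arity = 2; // record the underlying arity for fast-path calls
  wrapper.func = fun;
  return wrapper;
}

// A2 is the call-site helper: when the callee really takes two arguments,
// call it directly and skip the intermediate closure; otherwise fall back
// to genuine currying.
function A2(fun, a, b) {
  return fun.arity === 2 ? fun.func(a, b) : fun(a)(b);
}

var add = F2(function (a, b) { return a + b; });

var saturated = A2(add, 1, 2); // fast path: one direct call, no closure
var viaCurry = add(1)(2);      // slow path: currying still works
```

The cost being discussed in the thread is exactly this machinery: every saturated call still pays an arity check and an extra function hop compared with a plain `fun(a, b)` call, which is what BuckleScript-style de-currying eliminates.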
I think everyone hangs around this and other Elm discussion forums because we love the Elm concept of a simple functional language that promises to eliminate the need to deal with JavaScript; but unfortunately, due to the compiler shortcomings, that promise is not delivered other than for the most basic of programs.
Unfortunately and realistically, there seems to be only one major contributor to the Elm compiler and build system - Evan himself - and he is under increasing pressure to deliver more timely updates in a variety of areas, not only code efficiency. Also, the plan proposed above requires changes in at least two major parts of the compiler: the AST builder and the back-end code generator, so either one person needs to do both or there will be co-ordination involved. This work would have to precede any other work on further compiler optimization passes a la BuckleScript.
I don't understand this. Elm currently has better code output than Babel and TypeScript. Choosing Elm over those gives me faster applications (though I've never needed more speed) as well as smaller bundles. An application written with React+Immutable+Moment will have much more code than an equivalent Elm application, and it will also be much slower unless you have steel discipline and are willing to write more code. Elm's compiler is also faster than both Babel and TypeScript, and compile speed will get *much* faster in the next release. In my experience, Elm is already better than JavaScript in every conceivable way, and that's before taking static typing into account. True, I don't write games, but if I did, I probably wouldn't do it in an immutable language due to garbage-collector concerns. Depending on the game, I wouldn't even write it in a JavaScript environment, due to the lack of threads.
Why would you want arrays instead of tagged objects as the primary data structure? Just because BuckleScript does it doesn't make it faster. Try benchmarking it yourself. I did. Depending on the browser, accessing and/or changing an array isn't necessarily faster than accessing/altering a JavaScript object.
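A minimal way to run that comparison yourself is sketched below; no timings are claimed here, since results vary widely by engine and by how the JIT specialises each loop, so treat any single number with suspicion:

```javascript
// Micro-benchmark sketch: tagged objects vs arrays (illustrative names).
var N = 1e5;
var objs = [];
var arrs = [];
for (var i = 0; i < N; i++) {
  objs.push({ ctor: 'Point', _0: i, _1: i + 1 }); // tagged-object style
  arrs.push([i, i + 1]);                           // array style
}

function sumObjs() {
  var s = 0;
  for (var i = 0; i < objs.length; i++) s += objs[i]._0 + objs[i]._1;
  return s;
}

function sumArrs() {
  var s = 0;
  for (var i = 0; i < arrs.length; i++) s += arrs[i][0] + arrs[i][1];
  return s;
}

var t0 = Date.now();
var a = sumObjs();
var t1 = Date.now();
var b = sumArrs();
var t2 = Date.now();
console.log('objects:', t1 - t0, 'ms; arrays:', t2 - t1, 'ms');
```

Because both loops here are monomorphic (every element has the same shape), a modern engine can compile each access down to a fixed offset either way, which is why this kind of benchmark often shows no clear winner; the gap tends to appear in polymorphic code where object shapes vary.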
Finally, many people today are using Elm in production. There isn't a general consensus amongst Elm's users that the language is too slow, outputs too much code, or is slow to compile (a cold compile of my app takes ~9 seconds; that is NOTHING compared to an equivalent TypeScript application I'm working on, and isn't noticed in practice because of incremental compiles). The problem with Elm (if you could indeed call it a problem) is the lack of features. Many people would like better asset management, which is why Evan is working on it. Personally, I would like some sort of reflection support, as well as nested record syntax. Actually, I would gladly take release of the local-storage module before any of those, and re-connect notifications in elm-websockets.
The short of it is, the problems you're mentioning in this thread aren't problems for the vast majority of Elm developers. Had they been, they would've been addressed. Personally, I'm glad Evan isn't focused on the things you've proposed. There are other things I want that I'm glad are having a higher priority.
BuckleScript de-curries as much as possible; however, you can also force it in the type system explicitly by adding the `[@bs]` annotation to a function (type) definition. That enforces uncurrying at the type level and will even propagate through usage, as expected, to make sure it is not accidentally curried (though you can still explicitly curry it by wrapping it in a curried type). In most cases it de-curries very well and you never need `[@bs]` (the only real time I've used it is on DOM callback registrations with more than one argument, to make sure I do not accidentally pass a curried version; I've never used it in 'user' code as of yet).
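At the JavaScript level, what such de-currying buys is the difference between these two shapes (illustrative only; not BuckleScript's literal output for any particular annotation):

```javascript
// Curried shape: each partial application allocates a closure, and a
// saturated call `addCurried(1)(2)` costs two function calls.
var addCurried = function (a) {
  return function (b) { return a + b; };
};

// Uncurried shape (what a [@bs]-style guarantee produces): one plain
// multi-argument function, called directly with no intermediate closure.
var addUncurried = function (a, b) { return a + b; };

var x = addCurried(1)(2);   // two calls, one intermediate allocation
var y = addUncurried(1, 2); // one direct call
```

Enforcing the uncurried shape in the type system means callers can never accidentally pay the closure cost, which matters most for hot callbacks such as the DOM registrations mentioned above.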