Elm "faster than JavaScript" (version 0.18) - NOT - Parts I and II...


GordonBGood

Dec 26, 2016, 12:44:49 PM12/26/16
to Elm Discuss
Synopsis: Some, including Evan, maintain that Elm can be "faster than JavaScript". While that may be true for some use cases, such as more efficient UI updates due to compartmentalized use of the VirtualDOM, the actual JavaScript code generated by the compiler is not very efficient for many tight-loop cases. The reason is often unnecessary nested function calls that could easily be eliminated by making full use of the type information the Elm compiler already has.

Part I

An example: the following tight loop doesn't really do anything, so it should compile into the very tightest of code (I'm not expecting the Elm compiler to recognize that the result is actually known at compile time):

range : Int
range = 1000000000

testProg : Int -> Int
testProg n = -- do some work
  let lmt = min (n + 100000000) range in
  let loop i =
    if i >= lmt then i else
    loop (i + 1) in loop n

which compiles to the following JavaScript:

var _user$project$Temp1482759649866537$range = 1000000000;
var _user$project$Temp1482759649866537$testProg = function (n) {
var lmt = A2(_elm_lang$core$Basics$min, n + 100000000, _user$project$Temp1482759649866537$range);
var loop = function (i) {
loop:
while (true) {
if (_elm_lang$core$Native_Utils.cmp(i, lmt) > -1) {
return i;
} else {
var _v0 = i + 1;
i = _v0;
continue loop;
}
}
};
return loop(n);
};
All right, the code looks fairly good: we can see that for the inner `loop` function the compiler used its new tail-call optimization capability and turned the recursion into a while loop. One might also expect that any decent JIT compiler, such as Chrome's V8, will eliminate the `_v0` temporary variable. However, the real limitation of this loop is the call to the `Native_Utils.cmp` function. Function calls are expensive, at tens of CPU clock cycles each!

The pertinent JavaScript for `Native_Utils.cmp` is as follows:

var LT = -1, EQ = 0, GT = 1;

function cmp(x, y)
{
if (typeof x !== 'object')
{
return x === y ? EQ : x < y ? LT : GT;
}
...

Note that there are three native branches here (in addition to the enclosing one): one to check whether the arguments are objects (which of course they are not in the case of Ints, as here, or Floats, as the compiler well knows), one to check whether they are equal, and (given that generally they won't be equal most of the time) one to see which is greater. These conditions are not so bad by themselves, as they are very predictable for a modern CPU's branch predictor (`i` is almost always < `lmt`), so they cost at most a few CPU cycles; however, the call to the function in the first place will cost tens of CPU clock cycles!

Given that the compiler already knows and strictly enforces that the arguments are both Ints or Floats (both just Numbers in JavaScript), there is no reason it cannot directly output `(i >= lmt)` instead of the function call, making the whole inner loop take only a few CPU clock cycles (on a JavaScript engine such as Chrome's V8). If the compiler were consistent in applying this specialization rule, there would be no need for `Native_Utils.cmp` to check whether its arguments are objects; but for safety's sake, and considering that one extra check in the object case is likely negligible compared to the object processing itself, the check may as well be left in for the function's true best use case: comparing objects of the various kinds.

The Elm compiler only deals with two primitive numeric types, Int and Float (both of which are plain Numbers in JavaScript), which makes direct use of primitive operands very easy.
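To make the comparison concrete, here is a sketch in plain JavaScript of the two shapes the inner loop can take: the generic one going through a `cmp`-style helper (mirroring the non-object fast path of `Native_Utils.cmp` quoted above) and the specialized one using a primitive `>=`, which is what this post proposes the compiler emit. The function names and the shrunken loop bound are invented for the sketch.

```javascript
// Generic vs. specialized inner loop, as discussed above.
// `cmp` mirrors the non-object fast path of Native_Utils.cmp.
var LT = -1, EQ = 0, GT = 1;

function cmp(x, y) {
  if (typeof x !== 'object') {
    return x === y ? EQ : x < y ? LT : GT;
  }
  throw new Error('object case elided in this sketch');
}

// What the compiler currently emits: one helper call per iteration.
function loopGeneric(i, lmt) {
  while (true) {
    if (cmp(i, lmt) > -1) { return i; }
    i = i + 1;
  }
}

// What it could emit when both operands are known to be Ints.
function loopSpecialized(i, lmt) {
  while (true) {
    if (i >= lmt) { return i; }
    i = i + 1;
  }
}

// Both return the same result; only the per-iteration cost differs.
console.log(loopGeneric(0, 1000000), loopSpecialized(0, 1000000));
```

Timing the two loops with a much larger bound (e.g. in Node with `console.time`) is the quickest way to check how much of the gap a particular engine's inliner closes on its own.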

Part II

In a similar way, the definition of the Bitwise library to emulate Haskell's definition of the Data.Bits library was silly for only five Int functions, made even worse by a name collision with the Bool `xor` operator.  Because these are library functions, there is at least one level of function call for every use.

Just as for the above function call for known primitive types (in this case only for Int), these functions should be added to the Basics library as operators with appropriate infix levels and with the names `&&&`, `|||`, `^^^` , `<<<`, and `>>>` just as for F# (which Elm emulates more and more), with the Elm compiler directly substituting the equivalent primitive JavaScript operators.  The Bitwise library, which should never have existed, should be canned.

Note that although this will make bitwise operators much faster than they are currently, programmers should be aware that there are many operations taking place under the covers that may make these operations less efficient than possible alternatives: the arguments are truncated to 32-bit values, the operation is carried out, and then the result is sign-extended to convert back to a Number/Float, although for a sequence of bitwise operations the JavaScript engine may only do the conversions at the beginning and end of the sequence. Also, using bits in place of Bools saves space, but not as much as one might think: 32 bits are stored in a 64-bit Float, which may not save much compared to Bools, especially as some JavaScript engines do this specialization automatically. In other words, potential time and space savings must be tested. But in any case, these operations may as well be as efficient as possible, and the above changes should be made.
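The 32-bit round-trip described above can be observed directly in JavaScript; this small sketch (values chosen purely for illustration) shows the truncation and sign extension that happen around each bitwise operation:

```javascript
// JS numbers are 64-bit floats; each bitwise op converts its operands
// to 32-bit signed integers (ToInt32), operates, then converts back.
var big = 0x100000003;           // 2^32 + 3: does not fit in 32 bits
console.log(big | 0);            // 3 -- the high bits are discarded

var allOnes = 0xFFFFFFFF;        // 2^32 - 1 as a plain (positive) Number
console.log(allOnes | 0);        // -1 -- bit 31 is set, so the value is
                                 // sign-extended when converted back
```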

Robin Heggelund Hansen

Dec 26, 2016, 5:37:43 PM12/26/16
to Elm Discuss
Part I: Evan is already aware of this issue, you can see the full discussion in this github issue: https://github.com/elm-lang/elm-compiler/issues/1528
The short of it is that the code generator part of the compiler is not aware of type information, and adding this would be a serious undertaking. It will probably come at some point, as there are interesting optimizations that can be made, like the one mentioned.

Part II: Bitwise operators are inlined in 0.18, which gave a noticeable performance boost in libraries like Skinney/elm-array-exploration and mgold/random-pcg. Having them available as functions (just like +, -, *, etc.) allows more flexibility, like currying.

GordonBGood

Dec 26, 2016, 9:41:29 PM12/26/16
to Elm Discuss


On Tuesday, 27 December 2016 05:37:43 UTC+7, Robin Heggelund Hansen wrote:
Part I: Evan is already aware of this issue, you can see the full discussion in this github issue: https://github.com/elm-lang/elm-compiler/issues/1528
The short of it is that the code generator part of the compiler is not aware of type information, and adding this would be a serious undertaking. It will probably come at some point, as there are interesting optimizations that can be made, like the one mentioned

Although it is good to see that Evan is aware of the issue, it is discouraging to learn that the type information is not retained by the code generator and that this improvement is therefore part of a "huge project"; thus the linked issue has been closed. The person who raised the original issue produced timing results that showed only about a 20% speed improvement for that particular code. However, when I compare the above tight loop against the same loop written directly in JavaScript, the difference is about 700%! Of course, real use cases won't be quite this extreme, but I could see a fairly tight loop being 300% (+/- 100%) faster.
 

Part II: Bitwise operators are inlined in 0.18, which gave a noticable performance boost in libraries like Skinney/elm-array-exploration and mgold/random-pcg. Having it available as a function (just like +, -, * etc.) allows more flexibility like currying.

 I checked and you are right that these functions are now inlined.

The only other thing they should also have is infix for those with two operands for better (conventional) clarity of code, but this is no biggy.

It turns out that the huge slowdown compared to equivalent JavaScript code is another instance of the Part I problem: it is caused by a call to `Native_Utils.eq`, which is even less efficient than `Native_Utils.cmp`, resulting in about a 1500% slowdown compared to JavaScript for the following code:

import Bitwise as Bw

testProg : Int -> Int
testProg n = -- do some work
  let loop i =
    if Bw.and i 0x3FFFFFFF == 0x3FFFFFFF then i else
    loop (i + 1) in loop n

which compiles to:

var _user$project$Temp1482804013226255$testProg = function (n) {
var loop = function (i) {
loop:
while (true) {
if (_elm_lang$core$Native_Utils.eq(i & 1073741823, 1073741823)) {
return i;
} else {
var _v0 = i + 1;
i = _v0;
continue loop;
}
}
};
return loop(n);
};

with  `Native_Utils.eq` defined as:

function eq(x, y)
{
var stack = [];
var isEqual = eqHelp(x, y, 0, stack);
var pair;
while (isEqual && (pair = stack.pop()))
{
isEqual = eqHelp(pair.x, pair.y, 0, stack);
}
return isEqual;
}


function eqHelp(x, y, depth, stack)
{
if (depth > 100)
{
stack.push({ x: x, y: y });
return true;
}

if (x === y)
{
return true;
}
...

For this use case, there is a succession of conditions which won't cost that much, as explained in the opening post; but this is twice as slow as the example in Part I because there are typically two nested function calls: `Native_Utils.eq`, with `Native_Utils.eqHelp` called from within it.
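As an illustration of the proposed fix, here is a hand-specialized version of the compiled loop above, with the `Native_Utils.eq` call replaced by a primitive `===` since both operands are known Ints (the start value in the usage line is chosen so the loop terminates after a few iterations):

```javascript
// Hand-specialized form of the compiled testProg loop: the eq/eqHelp
// pair collapses to a single primitive === on the masked value.
function testProgSpecialized(n) {
  while (true) {
    if ((n & 0x3FFFFFFF) === 0x3FFFFFFF) { return n; }
    n = n + 1;
  }
}

console.log(testProgSpecialized(1073741820)); // 1073741823
```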

If and when the issue as of Part I is fixed, then bitwise operations will be fixed too.

Meanwhile, for real-world programs with intensive bitwise tight-loop computations, Elm is likely up to about 10 times slower than JavaScript, and for general numeric tight-loop calculations about 5 times slower, which I find unacceptable.

Robin Heggelund Hansen

Dec 27, 2016, 8:01:29 AM12/27/16
to Elm Discuss
I was the one who raised the original issue. The reason there is only a 20% performance improvement in the given example is that the comparison operator isn't the bottleneck of the code, which tests retrieving an element from an immutable array with 10,000 elements. What you are testing is essentially just the overhead of a comparison operation. Still, my benchmark is able to run at 7.5 million ops per second on a MacBook Air from 2013. This is fast enough for most web apps, which brings me to my next point.

Elm is, at least currently, a language to make web applications. How often are you going to run calculations in tight loops where the performance impact of comparison or equality operators gives a noticeable performance impact for the end user in a web browser?

I'm not saying that the proposed optimisation isn't worthwhile, it is. But Elm, being a 0.x language still, has a lot more important things to improve on before it turns its attention back to performance, which is already very good. Things like hot-code reloading, improved package manager and bug fixes.

Noah Hall

Dec 27, 2016, 8:11:21 AM12/27/16
to elm-d...@googlegroups.com
Tight calculation loops and performance matter a lot in some cases,
such as complex form validation. I've spoken with at least one company
suffering these bottlenecks with Elm, but it was resolvable by just
changing how the data is modelled.

For performance in production, if you're not running a game, then the
longest computational time is likely from multiple re-renders. Which
is thankfully not an issue. TBH, we do not care in production if Elm
is the "fastest". We care if it is "fast enough", which it is.
> --
> You received this message because you are subscribed to the Google Groups
> "Elm Discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elm-discuss...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

Mark Hamburg

Dec 27, 2016, 12:48:17 PM12/27/16
to elm-d...@googlegroups.com
This could, however, feed into the questions over Elm's interface to JavaScript. The easier it is to escape to JavaScript, the easier it is to say that Elm is fast enough for the vast majority of code that matters and if that doesn't cover a tight loop in your code, you can write just that part in JavaScript.

I've spent a long time working in Lua which has a fairly fast implementation as scripting languages go, but also had a very straightforward and powerful escape hatch to C. The former meant that one could expect to write most logic in Lua and not worry about performance. The latter meant that if performance were a problem, it would be easy to address when it came up.

The net from this for Elm is that I would argue that compiler optimizations can focus on easy wins provided that it is easy to move a computation to JavaScript when that doesn't work. At this point, however, that isn't possible because launching a computation through a command and waiting for a result back on a subscription where that result must be distinguished from all other results from other command driven requests is a mess. At the very least, please give us task ports so that we can readily fire off asynchronous computations with results — though for use as an escape hatch when Elm is too slow, I can feel the pressure for a synchronous native code extension mechanism.

Mark

GordonBGood

Dec 27, 2016, 10:03:24 PM12/27/16
to Elm Discuss


On Tuesday, 27 December 2016 20:01:29 UTC+7, Robin Heggelund Hansen wrote:
Elm is, at least currently, a language to make web applications. How often are you going to run calculations in tight loops where the performance impact of comparison or equality operators gives a noticeable performance impact for the end user in a web browser?

Oh, I can see applications where the performance of tight loops could be important, from games to intensive math applications to anything involving a lot of graphics manipulation. Currently, Elm isn't a very good fit for any of these, but with the proposed changes it could be. Then its main limitation (other than anything involving libraries) would be running on parallel cores, but as that is a limitation of JavaScript, I don't see it getting fixed anytime soon.

Richard Feldman

Dec 27, 2016, 10:36:23 PM12/27/16
to Elm Discuss
This is a lot of incendiary rhetoric around what amounts to a request for some niche performance optimizations that Evan sensibly decided were not worth prioritizing right now. I'm really glad he's working on asset management instead of this.

I definitely could have done without the sneering title of this post.

Robin Heggelund Hansen

Dec 27, 2016, 10:59:39 PM12/27/16
to Elm Discuss
I'm not saying that the performance of tight loops isn't important; what I'm questioning is how much comparison and equality operators affect it. I would assume that higher-order functions and immutability have a much bigger impact on tight loops in games, graphics, and intensive math applications.

On Monday, 26 December 2016 at 18:44:49 UTC+1, GordonBGood wrote:

GordonBGood

Dec 28, 2016, 12:40:38 AM12/28/16
to Elm Discuss
On Wednesday, 28 December 2016 00:48:17 UTC+7, Mark Hamburg wrote:
This could, however, feed into the questions over Elm's interface to JavaScript. The easier it is to escape to JavaScript, the easier it is to say that Elm is fast enough for the vast majority of code that matters and if that doesn't cover a tight loop in your code, you can write just that part in JavaScript.

I've spent a long time working in Lua which has a fairly fast implementation as scripting languages go, but also had a very straightforward and powerful escape hatch to C. The former meant that one could expect to write most logic in Lua and not worry about performance. The latter meant that if performance were a problem, it would be easy to address when it came up.

The net from this for Elm is that I would argue that compiler optimizations can focus on easy wins provided that it is easy to move a computation to JavaScript when that doesn't work. At this point, however, that isn't possible because launching a computation through a command and waiting for a result back on a subscription where that result must be distinguished from all other results from other command driven requests is a mess. At the very least, please give us task ports so that we can readily fire off asynchronous computations with results — though for use as an escape hatch when Elm is too slow, I can feel the pressure for a synchronous native code extension mechanism.

I think the main point of Elm is that one doesn't have to deal with the messiness of JavaScript, nor should one, as Elm compiles to JavaScript, which can run at close to machine-code speed on a good JavaScript engine such as Chrome's V8 if the Elm compiler generates efficient JavaScript; this is in contrast to languages such as (interpreted) Lua or Python, which run tens of times slower.

I do agree that writing new subscriptions seems unnecessarily complex, involving writing effect managers, which Evan discourages as he thinks "Elm needs maybe 10 of them total". Due to this, it seems to me that one can't write subscriptions entirely in Elm with a much simpler mechanism to distinguish the source, which might be a useful capability and might satisfy your requirement as well, since a subscription written in Elm could still call JavaScript through ports.

I think the whole subscription issue stems from some "muddy" thinking about the relationship between Tasks, Processes, and message passing, which needs to be ironed out before things can be made easier. And what will Elm do to compete with the elegant Promise/async/await model in (modern) JavaScript?

But we stray from the subject of "Elm as fast as JavaScript"...

My point is that I love the practicality of Elm with its simple yet functional syntax and would like never to be forced to use JavaScript or even Typescript again, but am forced to do so for certain types of applications due to these inefficiencies.

Peter Damoc

Dec 28, 2016, 1:49:34 AM12/28/16
to Elm Discuss
On Wed, Dec 28, 2016 at 6:40 AM, GordonBGood <gordo...@gmail.com> wrote:
But we stray from the subject of "Elm as fast as JavaScript"...

In my memory, Evan said that Elm can potentially be faster than JS. This implies some future where Elm is widespread enough and wasm widely enough adopted to have the resources and the context for an efficient compiler that bypasses JS. This is years away, so... what are we discussing here?

Personally, I believe in the mantra of "Make It Work, Make It Right, Make It Fast". 

Elm's speed is not a show stopper for the vast majority of Elm developers. Sure, it might prevent some potential users from taking Elm seriously for certain tasks but... what is better? catering for the needs of a very small set of potential users or catering for the needs of the vast majority of actual Elm users? 

My point is that I love the practicality of Elm with its simple yet functional syntax and would like never to be forced to use JavaScript or even Typescript again, but am forced to do so for certain types of applications due to these inefficiencies.

Or you can look at it from the other way: for certain types of applications you can already use a practical, simple language (Elm). :) 


--
There is NO FATE, we are the creators.
blog: http://damoc.ro/

Mark Hamburg

Dec 28, 2016, 2:18:13 AM12/28/16
to elm-d...@googlegroups.com
My point was that the calls for Elm optimization could largely be mitigated through a combination of fast enough for most purposes (arguably already there) coupled with a reasonable escape hatch to the host environment (as of now JavaScript and not really there) for the cases when it isn't. I'm suggesting that rather than investing in fancy compiler changes, some changes in the runtime architecture to allow performance intensive central loops to be written in other ways would be cheaper, simpler, and would arguably address some of the other issues that have floated up recently.

Elm has a weird ambivalence about native code. On the one hand, Evan has pushed back pretty hard in the past out of fear of getting tied to crappy JavaScript libraries and has pushed for doing everything in Elm. This is reflected in the relatively small amount of native code based work in the standard Elm package repository. On the other hand, when pushed on how one is supposed to do something that is currently difficult in Elm — e.g., some MDL conventions — others in the community have pushed for using native code rather than trying to write it in Elm.

My recommendation would be a middle path that says we believe most things are better in Elm but here are its limitations and here is how you can cleanly and easily reach outside of Elm when those limitations prove to be a problem. While the currently documented mechanisms may be clean, they aren't easy for many cases. Ports are extremely awkward for cases that require both a request and a response to that request. Effects managers are undocumented and discouraged. And native modules themselves are barely even admitted to let alone documented as a thing. Improve that situation and any concerns over both performance and interfacing to the broader world are substantially mitigated and Elm can better focus on its core mission. Fail to improve that situation and every effort to grow the market for Elm will also grow the pressure for it to do everything well.

Mark
--

GordonBGood

Dec 28, 2016, 2:30:32 AM12/28/16
to Elm Discuss


On Wednesday, 28 December 2016 10:59:39 UTC+7, Robin Heggelund Hansen wrote:
I'm not saying that performance of tight loops aren't important, what I'm questioning is how much comparison and equality operators affect them. I would assume that higher-order functions and immutability has a much bigger impact on tight loops in games, graphics and intensive math applications.

As for the lower speed of higher-order functions, you are right that these are slow due to (often nested) function calls, but just as in other languages, one doesn't have to use them in time-critical inner loops, especially since tail-call optimization within the same function now works.

You are right (again) that immutability has a great effect on the speed of these types of applications, but immutable data structures are implemented as libraries that can be improved, so this exterior immutability need not be a bottleneck. For instance, many such applications are going to need fast Arrays, for which the current immutable Array library is far from ideal. Haskell takes care of this by providing state monads that allow mutability while threading the state through the mutating code. I am not suggesting that Elm needs the complexity of full monads, but most of the need for mutability could be handled during the creation of the immutable data structures. Thus, adding the equivalent of Haskell's `accumArray`, `accum`, and `(//)` to the Elm Array library could make Array operations much faster for many of these applications. In Haskell, these take a (lazy) list of `(index, value)` pairs as an argument, which wouldn't be very efficient with Elm's (non-lazy) lists, but providing inline functions that work mutably only inside these Array create/update routines wouldn't be that hard. Elm's record updates already work on that principle: mutating a copy of the record only while the new (immutable) copy is being created, in effect making the assignment operator mutable only inside the update. Alternately, and even simpler syntactically, would be a very efficient lazy list and direct translations of the Haskell functions, but that wouldn't be as fast, as it would require at least one function call and tuple destructure per array element modified.
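The "mutate only while the new copy is being built" idea can be sketched in plain JavaScript (the function name `accumSet` and the pairs-as-arrays encoding are invented for the sketch; a real Elm library version would live behind the Array API and work on its tree representation rather than a flat copy):

```javascript
// Batch update in the spirit of Haskell's (//): copy the array once,
// apply all (index, value) updates in place on the private copy, then
// return the copy, which the caller treats as the new immutable value.
function accumSet(arr, updates) {
  var copy = arr.slice();                 // one allocation per batch
  for (var k = 0; k < updates.length; k++) {
    copy[updates[k][0]] = updates[k][1];  // mutation is invisible outside
  }
  return copy;
}

var a = [1, 2, 3, 4];
var b = accumSet(a, [[0, 9], [2, 7]]);
console.log(a); // [1, 2, 3, 4] -- original untouched
console.log(b); // [9, 2, 7, 4]
```

The point of the batch interface is amortization: one copy covers any number of updates, instead of one copy per single-element `set`.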

There are other bottlenecks, such as array bounds checks, but eventually the Elm compiler might get smart enough to recognize when the bounds are guaranteed by the code, as compilers for other languages do. Even if not optimized away, as seen in the examples here, conditional code is a minor overhead on a modern CPU compared to function calls.

For its many benefits, I would accept Elm code that is up to twice as slow as raw JavaScript, but not more than that. I believe it is possible to achieve better than that while retaining all of Elm's design goals through compiler and library improvements.

GordonBGood

Dec 28, 2016, 2:41:01 AM12/28/16
to Elm Discuss


On Wednesday, 28 December 2016 14:18:13 UTC+7, Mark Hamburg wrote:
My point was that the calls for Elm optimization could largely be mitigated through a combination of fast enough for most purposes (arguably already there) coupled with a reasonable escape hatch to the host environment (as of now JavaScript and not really there) for the cases when it isn't. I'm suggesting that rather than investing in fancy compiler changes, some changes in the runtime architecture to allow performance intensive central loops to be written in other ways would be cheaper, simpler, and would arguably address some of the other issues that have floated up recently.

Elm has a weird ambivalence about native code. On the one hand, Evan has pushed back pretty hard in the past out of fear of getting tied to crappy JavaScript libraries and has pushed for doing everything in Elm. This is reflected in the relatively small amount of native code based work in the standard Elm package repository. On the other hand, when pushed on how one is supposed to do something that is currently difficult in Elm — e.g., some MDL conventions — others in the community have pushed for using native code rather than trying to write it in Elm.

My recommendation would be a middle path that says we believe most things are better in Elm but here are its limitations and here is how you can cleanly and easily reach outside of Elm when those limitations prove to be a problem. While the currently documented mechanisms may be clean, they aren't easy for many cases. Ports are extremely awkward for cases that require both a request and a response to that request. Effects managers are undocumented and discouraged. And native modules themselves are barely even admitted to let alone documented as a thing. Improve that situation and any concerns over both performance and interfacing to the broader world are substantially mitigated and Elm can better focus on its core mission. Fail to improve that situation and every effort to grow the market for Elm will also grow the pressure for it to do everything well.

Mark, you raise a good point: as an interim measure, make it easier to interface with JavaScript in general, including easily adding subscriptions if necessary. But that will only happen if Evan and others see it that way, which they may not if they see it as "Elm is already good enough (for our use)".

Mark Hamburg

Dec 28, 2016, 3:06:33 AM12/28/16
to elm-d...@googlegroups.com
Just to make it clear, I'm not particularly calling for an easier way to create "subscriptions". I'm calling for a way to do one of the following — either is fine, and each has its pluses and minuses:

1. Expose a synchronous, externally defined function that takes JSON (or really anything a command port supports) in and returns JSON (or really anything a subscription port supports) out.

- or -

2. Expose a factory for externally defined asynchronous tasks where the task when executed receives JSON (etc) in and resolves to JSON (etc) when finished.

The former is easier to integrate into general computation. The latter more clearly reflects that the computation occurs "elsewhere" and must be managed as such.

Ports as they exist in 0.18 do not appear to offer this. If you want to make a request and receive a response, you need to make the request through a command port and receive the response through a subscription port. If you want to make multiple requests and receive multiple responses, you must do something to match responses with the code waiting for them which generally means generating and managing some form of ID's. The code to do that starts to look a lot like an effects manager, but the documentation discourages writing those. One could probably make this work via an out message based design with a pseudo-effects manager at the top level to manage ID's and talk to the ports but note that this pseudo-effects manager will be part of the model and will be storing functions — something that there is also advice to avoid doing.

Or if ports are a sufficient answer and I'm just not seeing it, then maybe we need more and better examples for, in the case of this thread, how you move an expensive computation out of Elm and over to JavaScript.
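A minimal sketch of the ID-matching pattern described here, on the JavaScript side of a pair of hypothetical ports named `request` and `response` (the fake `app` object stands in for a real Elm program so the sketch is self-contained; a real app would come from `Elm.Main.fullscreen()` or similar):

```javascript
// JS side of a request/response pair of ports: the Elm app tags each
// request with an id, JS computes, and echoes the id back so the Elm
// update function can route the response to the right waiter.
function wirePorts(app, compute) {
  app.ports.request.subscribe(function (msg) {
    app.ports.response.send({ id: msg.id, result: compute(msg.payload) });
  });
}

// Stand-in for an Elm app's ports object, just to exercise the wiring.
function makeFakeApp() {
  var handlers = [], sent = [];
  return {
    ports: {
      request: { subscribe: function (f) { handlers.push(f); } },
      response: { send: function (v) { sent.push(v); } }
    },
    fire: function (msg) { handlers.forEach(function (f) { f(msg); }); },
    sent: sent
  };
}

var app = makeFakeApp();
wirePorts(app, function (x) { return x * 2; }); // "expensive" computation
app.fire({ id: 7, payload: 21 });
console.log(app.sent[0]); // { id: 7, result: 42 }
```

The ID bookkeeping on the Elm side (a Dict from id to pending continuation, roughly) is exactly the pseudo-effects-manager boilerplate the paragraph above complains about.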

Mark
--

Peter Damoc

Dec 28, 2016, 3:33:25 AM12/28/16
to Elm Discuss
On Wed, Dec 28, 2016 at 9:06 AM, Mark Hamburg <mhamb...@gmail.com> wrote:
Just to make it clear, I'm not particularly calling for an easier way to create "subscriptions". I'm calling for a way to do one of the following — either is fine, each have their pluses and minuses:

1. Expose a synchronous, externally defined function that takes JSON (or really anything a command port supports) in and returns JSON (or really anything a subscription port supports) out.
- or -
2. Expose a factory for externally defined asynchronous tasks where the task when executed receives JSON (etc) in and resolves to JSON (etc) when finished.

1. would break the purity guarantee. 

2. this looks like the mechanism for scheduling messages. In short, if one could schedule a message to be delivered after the execution of some JS function, one could accomplish what you want. 

I'm not familiar with how difficult it would be to expose this functionality in the ports world. The functionality exists in the Native layer, as one can see in the Native implementation of Time.now, but exposing it in a safe and easy way through the ports API might not be trivial.





Zachary Kessin

Dec 28, 2016, 3:40:05 AM12/28/16
to elm-discuss
The problem with the escape hatch idea is that it breaks pretty much all of Elm's promises. So for the 0.5% of cases where speed is the most important factor, I don't want to see us break the purity and guarantees of the rest of the language.

Zach




--
Zach Kessin
Twitter: @zkessin
Skype: zachkessin

GordonBGood

Dec 28, 2016, 4:29:13 AM12/28/16
to Elm Discuss
It looks like the implementation in this StackOverflow answer (ignore the question and its relevance, as it is old) should still work: calling a function `getParentPos ()` runs some JavaScript through a port, which returns the answers through a subscription `parentPos` that fires a message to update. This is fine as long as there is only one subscription, but as Mark says, there is no effect manager, so it would likely get confused if there were other subscriptions, such as `Time.every`.

If this worked generally, it would be an easy way to hand off to a long-running program written in JavaScript, with that program able to fire progress reports back and even a different completion Msg; but it would be fairly useless if it didn't work alongside other subscriptions without adding a lot of code.

Peter Damoc

Dec 28, 2016, 4:44:45 AM12/28/16
to Elm Discuss
On Wed, Dec 28, 2016 at 10:29 AM, GordonBGood <gordo...@gmail.com> wrote:
It looks like the implementation in this StackOverflow answer (ignore the question and its relevance, as it is old) should still work: calling a function `getParentPos ()` runs some JavaScript through a port, which returns the answers through a subscription `parentPos` that fires a message to update. This is fine as long as there is only one subscription, but as Mark says, there is no effect manager, so it would likely get confused if there were other subscriptions, such as `Time.every`.

If this worked generally, it would be an easy way to hand off to a long-running program written in JavaScript, with that program able to fire progress reports back and even a different completion Msg, but it would be fairly useless if it didn't work alongside other subscriptions without adding a lot of code.


But you can do that already with subscriptions. I mean, you can have a subscription for every message you want to send to your Elm program.
To my understanding, you can trigger the long-running process with a Cmd port and listen on various other Sub ports for intermediary answers and completion.
You can use one port and push the data in a way that can generate multiple types of messages (progress, completion), or you can use multiple ports (e.g. one for progress, one for completion).
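The single-port variant can be sketched from the JavaScript side roughly as follows. All names here (`makeTaggedSender`, `fromJs`) are hypothetical, not from the thread; only the `app.ports.X.send` shape is the Elm 0.18 ports API.

```javascript
// The JavaScript side pushes tagged objects through one Sub port, and
// the Elm side turns the tag into different Msgs (progress, completion).
function makeTaggedSender(sendToElm) {
  return {
    progress: function (pct) { sendToElm({ tag: "progress", value: pct }); },
    done: function (answer) { sendToElm({ tag: "done", value: answer }); }
  };
}

// Wiring against an embedded Elm 0.18 app would look something like:
//   var app = Elm.Main.embed(node);
//   var send = makeTaggedSender(app.ports.fromJs.send);
//   send.progress(50);
//   send.done(42);
```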


GordonBGood

unread,
Dec 28, 2016, 9:10:23 AM12/28/16
to Elm Discuss
So you are saying we can already do the following:

type Msg = ProgProgress Progress | ProgCancelled | ProgDone Answer | Tick

port startLRP : () -> Cmd msg

port cancelLRP : () -> Cmd msg

subscriptions model =
  Sub.batch
    [ lrpProgress ProgProgress
    , lrpCancelled (\_ -> ProgCancelled)
    , lrpDone ProgDone
    , Time.every 1000 (\_ -> Tick) ]

port lrpProgress : (Progress -> msg) -> Sub msg

port lrpCancelled : (() -> msg) -> Sub msg

port lrpDone : (Answer -> msg) -> Sub msg

with the native JavaScript as per the link for each of the ports, all the types defined, and the update function handling all the Msg cases, the subscription system will automatically route each subscription to the right Msg?
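The JavaScript glue for those ports might look something like this sketch. The `app.ports.X.subscribe`/`send` shape is the Elm 0.18 ports API; `wireLrp` and `runJob` are hypothetical names for illustration, and the long-running-process internals are a placeholder.

```javascript
// Wires the four ports above to a job runner supplied by the caller.
// runJob receives callbacks it can use to report progress and completion.
function wireLrp(app, runJob) {
  var cancelled = false;
  app.ports.startLRP.subscribe(function () {
    cancelled = false;
    runJob({
      onProgress: function (p) { if (!cancelled) { app.ports.lrpProgress.send(p); } },
      onDone: function (answer) { if (!cancelled) { app.ports.lrpDone.send(answer); } }
    });
  });
  app.ports.cancelLRP.subscribe(function () {
    cancelled = true;
    app.ports.lrpCancelled.send(null);
  });
}
```

A real job would do its work in chunks (e.g. via `setTimeout`) so progress reports can interleave with the UI.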

Will this not cause problems with the Time.every subscription?

If so, I take it that the only thing stopping us from triggering an event from Elm code is the Elm type checker. If we had just one port out to JavaScript, with arguments saying which return port to fire with what arguments (or one return port with the logic you suggested) in a library, then we would be able to fire events from Elm code at will using that native "crosser" routine?

If we can do all of that, I don't see what Mark is worried about. Do we not need an effects manager? What's the point of a Router/effects manager? Ah, I think it's to do with queueing messages to process later, whereas this will process them immediately?

If this all works, we could write LRPs in JavaScript and optionally move them to Elm when it gets more efficient.

Peter Damoc

unread,
Dec 28, 2016, 9:23:19 AM12/28/16
to Elm Discuss
On Wed, Dec 28, 2016 at 3:10 PM, GordonBGood <gordo...@gmail.com> wrote:

So you are saying we can already do the following:

type Msg = ProgProgress Progress | ProgCancelled | ProgDone Answer | Tick

port startLRP : () -> Cmd msg

port cancelLRP : () -> Cmd msg

subscriptions model =
  Sub.batch
    [ lrpProgress ProgProgress
    , lrpCancelled (\_ -> ProgCancelled)
    , lrpDone ProgDone
    , Time.every 1000 (\_ -> Tick) ]

port lrpProgress : (Progress -> msg) -> Sub msg

port lrpCancelled : (() -> msg) -> Sub msg

port lrpDone : (Answer -> msg) -> Sub msg

with the native JavaScript as per the link for each of the ports, all the types defined, and the update function handling all the Msg cases, the subscription system will automatically route each subscription to the right Msg?

Will this not cause problems with the Time.every subscription?


This looks like valid code to me. 
I would implement it differently, in a way where the model captures the current state of the process and the subscription uses that state to listen only to the relevant Subs, but that's more of an optimization.

Also, I assume that Answer and Progress are aliases to types that can travel the ports. 

What kind of problems do you foresee with Time.every?


Max Goldstein

unread,
Dec 28, 2016, 10:25:15 AM12/28/16
to Elm Discuss
Regarding the pain of wiring up a subscription port, have you seen Task.perform <http://package.elm-lang.org/packages/elm-lang/core/latest/Task#perform>? It lets you do async work using nothing but The Elm Architecture itself, no ports required.

As for doing pure but computationally intensive work in a Task, you can do it in Elm:

Task.succeed () |> Task.andThen (\_ -> doHardStuff)

This won't work for inner render loops, and the asynchronous Elm will still be slower than JS, but it may be useful in some circumstances.
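For what it's worth, a rough JavaScript analogue of that trick (my reading: the point is to defer the pure work so it runs asynchronously instead of blocking the current update) would be:

```javascript
// Defer a pure computation to the microtask queue; the result arrives
// asynchronously, much like a succeeding Task fed back into update.
function deferWork(f) {
  return Promise.resolve().then(f);
}

// Usage sketch:
//   deferWork(function () { return expensiveComputation(); })
//     .then(function (result) { /* feed result back as a Msg */ });
```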

Max Goldstein

unread,
Dec 28, 2016, 10:38:02 AM12/28/16
to Elm Discuss

Task.succeed () |> Task.andThen (\_ -> doHardStuff)


Sorry, that should be Task.map not Task.andThen. 

Mark Hamburg

unread,
Dec 28, 2016, 10:58:08 AM12/28/16
to elm-d...@googlegroups.com
Thinking about this further, yes, one can wrap the logic to talk to a pair of ports in a way that makes them work like commands that actually include responses. Doing so, however, requires the following departures from standard Elm practice:

1. Communicate up from update functions using lists of out messages rather than commands. We need to do this so that we can route the requests to the logic handling the ports. (In practice, I find this pattern helpful in other ways as well, but it does depart from standard Elm practice and it requires more plumbing work for each point where modules get glued together.)

2. Store (tagging) functions in the portion of the model handling the ports so that results can be tagged for delivery when they come in through the subscription port. This goes against the "don't store functions in the model" advice and makes it unsafe to use the equality operator with one's model, though I suspect that in practice it would prove to be safe.

This is essentially the process of building and using what I referred to as a pseudo-effects manager. If anyone really wants example code, I can try to get some written up. If persistent cache doesn't become available soon, I may be writing this anyway to handle the logic for talking to local storage.

But even with this "solution" in hand, we're still not in a great position to address the issue of moving performance-critical code out of Elm and into JavaScript. The above design can wrap that logic in something that behaves much like a command, but commands are awkward to thread through a computation. In comparison, when I've tried taking synchronous algorithms and breaking out pieces into tasks (e.g., to access a database), the refactoring process is actually pretty straightforward. So, having some official, documented way to create JavaScript-based tasks would still be a huge win in opening up an escape hatch for performance issues and access to other functionality.

Mark

Mark Hamburg

unread,
Dec 28, 2016, 12:57:15 PM12/28/16
to elm-d...@googlegroups.com
TL;DR: Imagine having to do all HTTP requests via a single port for posting all requests and a single port for receiving back all results. Do you want to program to that model?

What I'm concerned with is the case where one needs to run more than one instance of the long-running or otherwise external process at once possibly delivering their results to different parts of the model. For example, imagine that our model contained a list of simulations where we could set parameters via Elm but the actual computation needed to be sent off to JavaScript (or asm-js). We could start a computation via the command port and get answers back via the subscription port, but because we have more than one simulation to process, we might reasonably want to start the computation for each of those simulations and get the results back with the appropriate tagging. Maybe we do a lot with simulations and we want to have different views with different sets of simulations and we would like to put the code for dealing with the ports in one place.

From an API standpoint, I would argue that the most flexible option is a way to provide the following from JavaScript:

runSimulation : SimulationParams -> Task SimulationError SimulationValue

These are flexible because we can chain them into bigger structures (and maybe someday, though not today, not only spawn them but also cancel them).

Sticking with a more command like API that is less composable but otherwise has similar semantics, one could provide:

runSimulation : (Result SimulationError SimulationValue -> msg) -> SimulationParams -> Cmd msg

We can build that using ports if we drop commands in favor of out messages — in this case an out message asking to run a simulation with particular parameters and a particular tagger function for the results. (We can build it using commands if we write an effects manager but writing those is discouraged.)

We might think that we could put the port logic in the Simulation module and create a subscription to the results port for each model currently awaiting the results of a simulation run. But this is difficult to make work in practice because all of the models will be subscribing to the same port and receiving the same messages via that port. Hence, we will need some way to globally manage the identifiers for either the runs or the models and by pushing the logic down toward the individual model instances, we have lost the opportunity to provide that sort of global ID management.
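The kind of global ID management described here could be sketched on the JavaScript side as follows (all names are hypothetical): each run gets a unique id when started, and results coming back through the single subscription port are routed by that id.

```javascript
// startOnJsSide is whatever kicks off the external computation
// (e.g. a Cmd-port handler); deliver is called for every result that
// comes back through the single Sub port.
function makeRunRegistry(startOnJsSide) {
  var nextId = 0;
  var pending = {};
  return {
    start: function (params, onResult) {
      var id = nextId++;
      pending[id] = onResult;
      startOnJsSide({ id: id, params: params });
      return id;
    },
    deliver: function (msg) {
      var onResult = pending[msg.id];
      if (onResult) { delete pending[msg.id]; onResult(msg.result); }
    }
  };
}
```

The point of the sketch is that the id counter and pending table must live in one place, which is exactly the global management that per-model subscriptions cannot provide.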

Mark

On Wed, Dec 28, 2016 at 6:10 AM, GordonBGood <gordo...@gmail.com> wrote:

If we can do all of that, I don't see what Mark is worried about. Do we not need an effects manager? What's the point of a Router/effects manager? Ah, I think it's to do with queueing messages to process later, whereas this will process them immediately?

If this all works, we could write LRPs in JavaScript and optionally move them to Elm when it gets more efficient.


Josh Adams

unread,
Dec 28, 2016, 2:02:10 PM12/28/16
to Elm Discuss
I don't love the idea of furthering this thread, but felt I'd take a swing at something I think is kind of core to the Elm community (and has been communicated as such):
  • There is a limited amount of time that can be spent on Elm (this is true of all things)
  • To maximize the benefit/time ratio of the work being done, it is best to start from a motivating use case that identifies an actual problem.
  • Once one has been identified, writing up a short (because there's limited time, again) explanation of the real world problem that you are experiencing, coupled with an SSCCE, is the best way to communicate the problem you're having - both the motivating use case and the initial work that's gone into identifying the core problem.
Inside of this framework, I've seen extremely meaningful discussions occur. Outside of this framework, I've witnessed a lot of threads that serve to rile people up but don't meaningfully benefit the language.

I would love a do-over of this thread.  If you have a real-world problem that you can share with us, we can all work on seeing what we can do to help.  If you don't, we'll talk about micro-optimizations.

As one example, Noah mentioned that there was a real-world problem related to this, and by modeling the data in a slightly different way they were able to solve it.  If we had a similar situation perhaps a similar result could ensue.

My $0.02. (given the number of words, it seems my opinions are cheap)

Joshie

GordonBGood

unread,
Dec 28, 2016, 6:16:07 PM12/28/16
to Elm Discuss
Joshie, your words are good, but I actually got out of the thread what I wanted:

  1. I have a real-world problem with the performance of Elm on specific types of problems involving intense math.
  2. I submitted a small example here showing that for any kind of tight loop, Elm produces code that can be five to ten times slower than JavaScript.
  3. I suggested that the compiler might be able to produce better code without too extensive changes.
  4. The answer from several knowledgeable contributors is that, yes, the compiler can be improved, but it is a huge project and not likely to happen soon.
  5. From there, several have suggested that in the interim the only solution is to call more efficient code written in JavaScript (or, of course, it could be written in TypeScript or Kotlin with the JavaScript backend).
  6. Problems in doing that were raised, primarily the difficulty of calling ports and getting results back through subscriptions.
  7. A solution was suggested that shows it isn't that hard, if it works.
  8. Further details on the problem were provided by Mark, with the general idea that better documentation and libraries are needed to support farming out asynchronous work to Tasks; that standardization probably isn't ready yet due to the state of the Process definition, and his further refinements to Elm are unlikely to be implemented soon.
  9. For my purpose and the reason I opened the thread, I have the best answer available right now, if it works: farm out the computationally intensive work to JavaScript with the discussed method.
  10. I will test that capability and report back.
Even if this method works, the interim approach means I have to produce JavaScript code, which I understood the purpose of Elm was to avoid. If one still has to write large parts of the code in JavaScript for this particular application, one starts to ask why not use TypeScript directly (C#-ish syntax) or Kotlin with the JavaScript back end (more concise, can be very functional in use, and at quite a bit more advanced a stage of development than Elm). For me, these are the implied questions this thread raises.

 

GordonBGood

unread,
Dec 28, 2016, 6:21:43 PM12/28/16
to Elm Discuss


On Thursday, 29 December 2016 06:16:07 UTC+7, GordonBGood wrote:
Even if this method works, the interim approach means I have to produce JavaScript code, which I understood the purpose of Elm was to avoid. If one still has to write large parts of the code in JavaScript for this particular application, one starts to ask why not use TypeScript directly (C#-ish syntax) or Kotlin with the JavaScript back end (more concise, can be very functional in use, and at quite a bit more advanced a stage of development than Elm). For me, these are the implied questions this thread raises.

Forgot to add: of course it isn't an either/or decision. One can use Elm for what it does best, structuring the user interface, and use Kotlin or some other language for what they currently do better, producing more efficient code.

Martin DeMello

unread,
Dec 28, 2016, 6:22:22 PM12/28/16
to elm-d...@googlegroups.com
On Wed, Dec 28, 2016 at 3:16 PM, GordonBGood <gordo...@gmail.com> wrote:
Even if this method works, the interim method means I have to produce JavaScript code which I understood that the purpose of Elm was to avoid.  If one still has to write large parts of the code in JavaScript for this particular application, one then starts to ask why not use TypeScript directly (c#'ish syntax) or Kotlin with the JavaScript back end (more concise, can be very functional in use, quite a bit more advanced stage of development than Elm).  For me, these are the implied questions this thread raises.

Elm certainly hasn't hit the "all things for all people" stage yet, but if you do want to use an Elm-like language with easier JS interop, I'd recommend F# rather than TypeScript or Kotlin.


martin

GordonBGood

unread,
Dec 28, 2016, 9:25:30 PM12/28/16
to Elm Discuss
Thanks, Martin, read and bookmarked.  Looks like I'll have to do some comparisons between Fable (F#) and Kotlin.  Kotlin has become my favourite JVM language due to its simplicity yet almost purely functional characteristics (about as much as F#), but I do like the F#/Elm syntax better.

Related to speed, it seems to me that the working Fable (F#) code linked from your link above is more responsive than the working Elm code from another link on that page, both for the same sample application; am I imagining things or do you see that too? 

Martin DeMello

unread,
Dec 28, 2016, 9:47:13 PM12/28/16
to elm-d...@googlegroups.com
On Wed, Dec 28, 2016 at 6:25 PM, GordonBGood <gordo...@gmail.com> wrote:

Related to speed, it seems to me that the working Fable (F#) code linked from your link above is more responsive than the working Elm code from another link on that page, both for the same sample application; am I imagining things or do you see that too?

yes, but it seems to be due to a slower load time from the backend rather than any perceptible frontend differences.

martin 

GordonBGood

unread,
Dec 29, 2016, 6:51:50 AM12/29/16
to Elm Discuss
Do you know anything about PureScript as compared to Fable and Elm (according to the article it sits somewhere between as far as safety goes)?  It seems that it has an extensive Haskell-like syntax and philosophy, but that may be overly complex for the types of people who would rather choose Elm.
 

art yerkes

unread,
Dec 29, 2016, 10:23:50 AM12/29/16
to Elm Discuss
I can comment a bit on purescript and fable (and ghcjs) from a n00b perspective in all three.

- Purescript's type system is not as well polished as haskell and has IMO a steeper learning curve.
- Purescript's package ecosystem is very immature.
- Purescript's js interop is very easy and doesn't fight at all.
- GHCJS speed was surprisingly fast and it has concurrency features builtin (such as calling haskell from javascript with and without a wait for a result, chans etc).
- GHCJS has the full haskell package set, but pins some emulated packages at specific versions, so some caveats apply.
- GHCJS interop allows you at the core to just define javascript to run inline, but seems to be in flux in recent versions.  I was able to use the basic interop features but couldn't find current docs on advanced ones.
- Fable's type system has everything from .NET, including classes and interfaces that work as you expect. It's basically the same type system as Kotlin's, plus proper sum types and convenient product types. The Fable language output isn't as perfectly stable and mature as the others, however, and the type system has a hole: some functions are invisibly curried (temporaries, non-exported functions in a module), and some are invisibly JavaScript-style (exported ones in a module, interface methods).
- Fable's package ecosystem is basically nonexistent, and pure F# generally won't work if it relies on unemulated stuff from System.* but js interop is easy.
- Fable's interop is mature and pretty good.  It lets you just formulate javascript to have in the literal generated code, replacing a specific function call.

GordonBGood

unread,
Dec 30, 2016, 8:27:21 AM12/30/16
to Elm Discuss
Art, thanks for your input.  I do like a Haskell-like syntax and programming model (something like Elm), so I haven't tried Fable yet, although F# syntax is all right.  Based on what you are saying and my trials, everything (really including Elm) is still at too unstable a stage, although Elm is somewhat usable (if slow) as it is.

I did the following:
  1. I tried PureScript in the browser on their "try" website and found that although I could fairly quickly get used to its changes in syntax and type class model relative to Haskell, the libraries aren't very stable: although I could write and successfully compile a simple little Data.List.Lazy progression, it would fail silently for some program variations and blow up with a stack overflow for others; I think it is inconsistent in applying force to the lazy stream with the library functions.
  2. I tried installing GHCJS, since it doesn't have a "try" website, and found that not smooth at all, with broken version references in the cabal file. Eventually I got it to install with a patched version that someone had contributed, but couldn't get ghcjs-boot to build the libraries as per the instructions because some parts weren't installed where the OS could find them (more patches and messing around required); it's not stable enough if it takes all this work!
  3. I haven't tried Fable, but from what you are saying it will probably install, though the code output sometimes isn't very good and there is no package ecosystem.
  4. I tried Kotlin for generating JavaScript but found that although it generates code for the JVM fine, it still isn't very stable for producing JavaScript code; only versions in development have much success at all (and the tail call optimization that works for the JVM doesn't work for JavaScript).
So it's back to using Elm for the front end and TypeScript/JavaScript, which are stable, for the stuff that needs to be faster, until some of these (hopefully GHCJS, though PureScript would be acceptable) get more stable and work consistently. Elm compiling to more efficient code would also fill my needs.

By the dates on the latest PR commits and how much there is to do, PureScript and Fable have the best chance of getting there reasonably quickly, with Elm also actively developed, but I think we are looking at a minimum of a year for any of them to reach stable status.

art yerkes

unread,
Dec 30, 2016, 3:25:53 PM12/30/16
to elm-d...@googlegroups.com
The promise of never needing to really get dirty and debug live Elm code is real, and it's true that nothing else quite lives up to that.

I wrote some notes on getting ghcjs up and running earlier.  The basic trick is to ignore the initial README documentation and look on the wiki, following the directions for your specific haskell version.  For 7.10:

I used ghc 7.10, which doesn’t work out of the box with ghcjs.  I cloned ghcjs master and used these instructions: https://github.com/ghcjs/ghcjs/wiki/GHCJS-with-GHC-7.10 from the wiki to install it properly.  The quick start on the ghcjs readme doesn’t quite mention everything.  In particular, you must bootstrap like:


ghcjs-boot --dev --ghcjs-boot-dev-branch master --shims-dev-branch master




Bob Zhang

unread,
Dec 30, 2016, 11:44:10 PM12/30/16
to Elm Discuss

Just for fun, I pasted the same code into the BuckleScript playground:


The generated code is below

```js
function testProg(n) {
  var lmt = Pervasives.min(n + 100000000 | 0, 1000000000);
  var _i = n;
  while(true) {
    var i = _i;
    if (i >= lmt) {
      return i;
    }
    else {
      _i = i + 1 | 0;
      continue ;
      
    }
  };
}
```
Note that the BuckleScript compiler is able to do type specialization for generic comparison, and it also heavily optimizes the curried calling convention.


On Monday, December 26, 2016 at 12:44:49 PM UTC-5, GordonBGood wrote:
Synopsis:  Some, including Evan, maintain that Elm can be "faster than JavaScript".  While that may be true for some use cases, including perhaps more efficient UI updates due to the compartmentalized use of VirtualDOM, the actual JavaScript code generated by the compiler is not very efficient for many tight-code cases.  The reason is often unnecessary nested function calls that could easily be eliminated by making full use of the type information that the Elm compiler has.

Part I

An example: the following tight loop doesn't really do anything, so it should compile into the very tightest of code (and I'm not expecting the Elm compiler to recognize that the result is actually known at compile time):

range : Int
range = 1000000000

testProg : Int -> Int
testProg n = -- do some work
  let lmt = min (n + 100000000) range in
  let loop i =
    if i >= lmt then i else
    loop (i + 1) in loop n

which compiles to the following JavaScript:

var _user$project$Temp1482759649866537$range = 1000000000;
var _user$project$Temp1482759649866537$testProg = function (n) {
var lmt = A2(_elm_lang$core$Basics$min, n + 100000000, _user$project$Temp1482759649866537$range);
var loop = function (i) {
loop:
while (true) {
if (_elm_lang$core$Native_Utils.cmp(i, lmt) > -1) {
return i;
} else {
var _v0 = i + 1;
i = _v0;
continue loop;
}
}
};
return loop(n);
};
All right, the code looks fairly good, and we can see that for the inner `loop` function the compiler used its new ability to do tail-call optimization and turn it into a while loop.  Also, one might expect that any decent JIT compiler such as Chrome's V8 will use copy propagation and get rid of the `_v0` variable.  However, the real limitation of this loop is the call to the `Native_Utils.cmp` function: function calls cost tens of CPU clock cycles each!

The pertinent JavaScript for `Native_Utils.cmp` is as follows:

var LT = -1, EQ = 0, GT = 1;

function cmp(x, y)
{
if (typeof x !== 'object')
{
return x === y ? EQ : x < y ? LT : GT;
}
...

Note that there are three native branches here (in addition to the enclosing one): one to check whether the arguments are objects (which of course they are not in the case of Ints, as here, or Floats, as the compiler well knows), one to check whether they are equal, and (given that generally they won't be equal most of the time) one to see which is greater.  These conditions are not so bad by themselves, as they are very predictable for a modern CPU's branch prediction (`i` is almost always less than `lmt`), so they will cost at most a few CPU cycles; however, the call to the function in the first place costs tens of CPU clock cycles!

Given that the compiler already knows and strictly enforces that the arguments are both Ints or Floats (both of which are just Numbers in JavaScript), there is no reason it cannot directly output `(i >= lmt)` instead of the function call and make the whole inner loop take only a few CPU clock cycles (on a JavaScript engine such as Chrome's V8).  If the compiler were consistent in applying this specialization rule, there would be no need for `Native_Utils.cmp` to check whether the arguments are objects; but for safety's sake, and considering that one extra check is negligible compared to the cost of processing objects, it may as well be left in for its true best use case of comparing objects of the various kinds.
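The proposed specialization can be sketched in plain JavaScript: `loopGeneric` mimics the compiler's current output (with a simplified `cmp`), while `loopSpecialized` is what it could emit when it knows both operands are Ints. The function names are mine, for illustration.

```javascript
// Simplified version of the non-object branch of Native_Utils.cmp.
var LT = -1, EQ = 0, GT = 1;
function cmp(x, y) {
  return x === y ? EQ : x < y ? LT : GT;
}

// What the compiler emits today: a function call on every iteration.
function loopGeneric(n, lmt) {
  var i = n;
  while (true) {
    if (cmp(i, lmt) > -1) { return i; }
    i = i + 1;
  }
}

// What it could emit with type-directed specialization: a primitive
// comparison, no call.
function loopSpecialized(n, lmt) {
  var i = n;
  while (true) {
    if (i >= lmt) { return i; }
    i = i + 1;
  }
}
```

Timing the two over a large range is a quick way to see the cost of the extra call, bearing in mind that a JIT may manage to inline the small `cmp` in a micro-benchmark.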

The Elm compiler only deals with two primitive number types, Int and Float (both are just Numbers in JavaScript), which makes direct use of primitive operands very easy.

Part II

In a similar way, defining the Bitwise library to emulate Haskell's Data.Bits library was silly for only five Int functions, made even worse by a name collision with the Bool `xor` operator.  Because these are library functions, there is at least one level of function call for every use.

Just as with the above function calls for known primitive types (in this case only Int), these functions should be added to the Basics library as operators with appropriate infix levels and the names `&&&`, `|||`, `^^^`, `<<<`, and `>>>`, just as in F# (which Elm emulates more and more), with the Elm compiler directly substituting the equivalent primitive JavaScript operators.  The Bitwise library, which should never have existed, should be canned.

Note that although this will make bitwise operators much faster than at present, programmers should be aware that there are many operations happening under the covers that may keep these operations from being as efficient as possible alternatives: the arguments are truncated to 32-bit values, the operation is carried out, and the result is then sign-extended to convert back to a Number/Float, although for a sequence of bitwise operations the JavaScript engine may only do the conversions at the beginning and end of the sequence.  Also, using bits in place of Bools saves space but not as much as one might think: 32 bits are stored in a 64-bit Float, which may not save much compared to Bools, especially as some JavaScript engines do that specialization automatically.  In other words, potential time and space savings must be tested.  But in any case, these operations may as well be as efficient as possible, and the above changes should be made.
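The under-the-covers behaviour described above can be seen directly in plain JavaScript, which any Elm bitwise operation ultimately bottoms out in (the helper names are mine):

```javascript
// All JavaScript bitwise operators truncate their Number operands to
// int32 before operating, then hand back a sign-extended Number.
function and32(a, b) { return a & b; }
function shiftRightZf32(a, n) { return a >>> n; }

// Truncation: bits above 31 are discarded before the operation, so this
// behaves like 5 & 0xFF.
var truncated = and32(Math.pow(2, 32) + 5, 0xFF);

// Sign extension: bit 31 set means a negative Number comes back,
// except for >>>, which yields an unsigned result.
var signed = (0xFFFFFFFF | 0);         // -1
var unsigned = shiftRightZf32(-1, 0);  // 4294967295
```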

GordonBGood

unread,
Dec 31, 2016, 7:14:24 AM12/31/16
to Elm Discuss
On Saturday, 31 December 2016 03:25:53 UTC+7, art yerkes wrote:
The promise of never needing to really get dirty and debug live Elm code is real, and it's true that nothing else quite lives up to that.

Amen to that: the advantage of being simple and placing restrictions on what one can do is that there is really only one way to do things, which can be carefully tested so as not to cause crashes or weird behaviour...
 
I wrote some notes on getting ghcjs up and running earlier.  The basic trick is to ignore the initial README documentation and look on the wiki, following the directions for your specific haskell version.  For 7.10:

I used ghc 7.10, which doesn’t work out of the box with ghcjs.  I cloned ghcjs master and used these instructions: https://github.com/ghcjs/ghcjs/wiki/GHCJS-with-GHC-7.10 from the wiki to install it properly.  The quick start on the ghcjs readme doesn’t quite mention everything.  In particular, you must bootstrap like:


ghcjs-boot --dev --ghcjs-boot-dev-branch master --shims-dev-branch master

Thanks for that; there isn't a lot of help getting GHCJS to work with GHC 8.0.1.

The encouraging news is that there is very active work going on for GHCJS, just not committed to the github GHCJS repository, with the latest (version 2.1) source available from Luite (the author of GHCJS); to me that means it may reach stable status before the others (other than Elm).

As I am on Windows, I used the instructions on how to set up MSYS2 as per the GHCJS Readme.md, to which I added a $PATH addition for node (for later use and testing) installed under Windows.  I then performed the following steps as per your suggestion, but got the following error:

System\Process\Common.hs:50:31: error:
    Module `System.Win32.DebugApi' does not export `PHANDLE'

GHCi\Signals.hs:9:1: error:
    Failed to load interface for `System.Posix.Signals'
    Perhaps you meant System.Posix.Internals (from base-4.9.0.0)
    Use -v to see a list of the files searched for.
cabal.exe: Error: some packages failed to install:
ghci-8.0.1 failed during the building phase. The exception was:
ExitFailure 1
process-1.4.2.0 failed during the building phase. The exception was:
ExitFailure 1

So can't use it.

GordonBGood

unread,
Dec 31, 2016, 8:35:28 AM12/31/16
to Elm Discuss
On Saturday, 31 December 2016 11:44:10 UTC+7, Bob Zhang wrote:

Just for fun, I pasted the same code under BuckleScript playground:


The generated code is below

```js
function testProg(n) {
  var lmt = Pervasives.min(n + 100000000 | 0, 1000000000);
  var _i = n;
  while(true) {
    var i = _i;
    if (i >= lmt) {
      return i;
    }
    else {
      _i = i + 1 | 0;
      continue ;
      
    }
  };
}
```
Note that BuckleScript compiler is able to do type specialization for generic comparison, also it heavily optimized curried calling convention
 
That's interesting; OCaml has quite a good syntax.  The main thing I have found wrong with it in the past is how few primitive types it has, as int is 32 bits on 32-bit platforms and 64 bits on 64-bit platforms; actually less than that, as there are tag bits.  Going by the try site, this is still true; it might not affect its use for BuckleScript so much, as most things are going to get converted to JavaScript Numbers anyway, but one might be forced to use floating types more often than really required, and there is still no way to represent primitive unboxed 32/64-bit integers natively.

Bob Zhang

unread,
Dec 31, 2016, 11:09:18 AM12/31/16
to Elm Discuss
For the JS backend, `int` is always int32; we have int64 too, which is simulated using two `int`s.  In OCaml, float is always double, which is exactly the same as a JS number.
The cool thing is that OCaml already has a very optimized native backend, and its type system is much more expressive (on par with Haskell)
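A minimal sketch of the two-int simulation of int64 that Bob describes. This is my own illustration of the idea, not BuckleScript's actual runtime representation: a 64-bit value is held as a `{hi, lo}` pair of signed 32-bit ints, with a carry propagated from the low word.

```js
// Illustration only: int64 as a {hi, lo} pair of signed 32-bit ints.
function add64(a, b) {
  var lo = (a.lo + b.lo) | 0; // low 32 bits, wrapped to signed int32
  // detect carry out of the low word using unsigned arithmetic
  var carry = ((a.lo >>> 0) + (b.lo >>> 0)) > 0xFFFFFFFF ? 1 : 0;
  var hi = (a.hi + b.hi + carry) | 0; // high 32 bits plus the carry
  return { hi: hi, lo: lo };
}

// 0x00000000FFFFFFFF + 1 = 0x0000000100000000
var r = add64({ hi: 0, lo: 0xFFFFFFFF | 0 }, { hi: 0, lo: 1 });
// r.hi === 1, r.lo === 0
```

Every 64-bit operation pays for this pairing and carry handling, which is why int64 arithmetic is noticeably slower than native int32 on a JS backend.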

art yerkes

unread,
Dec 31, 2016, 11:39:07 AM12/31/16
to elm-d...@googlegroups.com
I also tried it with ghc-8 and it didn't seem like it was going to work, although I may just not have found the right steps. Decided to hold off. Haskell changes relatively quickly compared to other languages.


Martin DeMello

unread,
Dec 31, 2016, 2:33:57 PM12/31/16
to elm-d...@googlegroups.com
BuckleScript's codegen looks really nice. Does it have any convenient way of working with the DOM and generating HTML/SVG? I looked through the docs and it seemed focused on writing JavaScript libraries rather than UI code.

martin

--
You received this message because you are subscribed to the Google Groups "Elm Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elm-discuss+unsubscribe@googlegroups.com.

GordonBGood

unread,
Dec 31, 2016, 10:09:28 PM12/31/16
to Elm Discuss
I see that BuckleScript would work fine for JavaScript output and OCaml can be fast, but wouldn't an int64 made of two ints be a bit slow?  It's just that I prefer Haskell's syntax and capabilities over OCaml's, as it feels like a more modern language.  I do like F# (and thus probably Fable), but it isn't as pure a language as Haskell.  I think I'll see what GHCJS can do for me, once I can get it installed.

GordonBGood

unread,
Dec 31, 2016, 10:52:43 PM12/31/16
to Elm Discuss
I think that the author and supporters may not have tested the latest version 2.1 on Windows, and the problem may be due to differences between using *nix pthreads and the Windows pthreads implementation for multi-processing/multi-threading.  I would really like to try it, but will wait until they sort this out.  I have logged an issue against the ghc-8.0 branch.

Peter Damoc

unread,
Jan 1, 2017, 7:57:47 AM1/1/17
to Elm Discuss
On Sun, Jan 1, 2017 at 1:41 PM, GordonBGood <gordo...@gmail.com> wrote:
That leaves me impressed with the ease-of-use for Elm, with my biggest wish list all concerned with making Elm faster; but it is still early days for the language as long as it is still evolving.

Elm is fast enough for most purposes, and it will get faster (e.g. there is a new implementation of Array in the works that I understand to be faster).
If you need a piece of code to be faster, you also have the option of designing it as a driver for the runtime and using the Native layer to access it.
This is an advanced use of Elm and a last resort, but you have this freedom if it is what would push the project over the "fast enough for practical purposes" line.

GordonBGood

unread,
Jan 1, 2017, 9:48:09 AM1/1/17
to Elm Discuss
On Wednesday, 28 December 2016 21:23:19 UTC+7, Peter Damoc wrote:


On Wed, Dec 28, 2016 at 3:10 PM, GordonBGood <gordo...@gmail.com> wrote:

So you are saying we can already do the following:

type Msg = ProgProgress Progress | ProgCancelled | ProgDone Answer | Tick

port startLRP : () -> Cmd msg

port cancelLRP : () -> Cmd msg

subscriptions model =
  Sub.batch
    [ lrpProgress ProgProgress
    , lrpCancelled ProgCancelled
    , lrpDone ProgDone
    , Timer.every 1000 Tick ]

port lrpProgress : (Progress -> Msg) -> Sub Msg

port lrpCancelled : (() -> Msg) -> Sub Msg

port lrpDone : (Answer -> Msg) -> Sub Msg

with the Native JavaScript as per the link for each of the ports, Types all defined, update function handling all Msg cases, and the subscription system will automatically handle getting each subscription to the right Msg?

Will this not cause problems with the Timer.every subscription?


This looks like valid code to me. 
I would implement it differently, in a way where the model captures the current state of the process and the subscription uses that state to listen only to the relevant Subs but... that's more of an optimization.

I would implement it differently too, but this was just for a quick example.
 
Also, I assume that Answer and Progress are aliases to types that can travel the ports.

Of course; I didn't want to pin them down to specific types.  Whether they are Int, Float, Record, Union, Tagged Union, Array, or whatever is immaterial.

What kind of problems do you foresee with Timer.every? 

That's why I added it: I was concerned that mixing some Subs that use effect managers with some that don't would somehow mix up the system.  I've now tested this (I got sidetracked looking for something good to write the fast bits in that isn't JavaScript or even TypeScript, as you can see in the latter part of the thread), and it works exactly as advertised, so no worries.  My takeaway on effect managers (defined in `effect` modules) is that they are required when the same Sub might be used for more than one Msg, possibly of different kinds; as long as one can write code as here, or even a little more complex as you are suggesting, they are not required.  Since they fire only for the specific Subs for which they are defined and work behind the scenes, their users don't need to be concerned with their implementation.

That leaves me impressed with the ease of use of Elm, with my biggest wish-list items all concerned with making Elm faster, as is the subject of this thread; but it is still early days for optimization of the language as long as it is still evolving.

As raised in the opening post, the main concern is nested function calls, a common bottleneck in functional languages.  This is usually addressed by inlining functions where possible (with specializations and rules defining when this can be done), and by using compiler magic to avoid functions entirely when they are not necessary (as has already been done for Records and the commonly used Tuples, which is why these are not created with library functions).  I note that the current 0.18 compiler already does quite a lot of inlining of native functions/operators (except for comparison, which is the opening post's concern).

A more minor issue is the way immutability has been implemented in the existing libraries, especially the Array library.  So much attention has been paid to giving the `set` function better big-O performance that every Array operation carries a considerable constant overhead, making the library too slow for anything serious.  A better approach is the Haskell one for immutable arrays: here they would be plain JS arrays (in this context) treated immutably, with no equivalent of `set` at all, and with all transmuting of arrays handled by functions working on the whole array.  In Haskell the transmuting functions are based on (relatively efficient) lazy lists, which avoids excessive memory use; in Elm I am suggesting they be based on passed-in functions, where the new, temporarily mutable array is an argument whose type cannot be created in Elm code but can only exist when passed into the context of these transmuting functions.  Then, to get speed, one would not set array elements individually, but would minimize the creation of new immutable arrays inside the function passed to the creation function in the first place.  As these are library concerns, this can easily be fixed later, and if necessary in the interim it wouldn't be too hard to create the required libraries.
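A rough JavaScript sketch of the kind of API meant here; the names and shape are hypothetical, purely to illustrate the idea. The library exposes only whole-array transmuting functions, and the mutable JS array is visible solely inside the builder callback, so the result can be treated as immutable everywhere else.

```js
// Hypothetical sketch: mutation is confined to the callback passed to the
// creation function; the returned array is observed immutably afterwards.
function buildArray(len, init, fill) {
  var arr = new Array(len);
  for (var i = 0; i < len; i++) arr[i] = init; // start fully initialized
  fill(arr);  // the only place user code may mutate
  return arr; // treated as immutable from here on
}

var squares = buildArray(5, 0, function (a) {
  for (var i = 0; i < a.length; i++) a[i] = i * i;
});
// squares is [0, 1, 4, 9, 16]
```

The point of the design is that element writes cost a plain JS array store, rather than the persistent-tree rebuilding that a general-purpose immutable `set` must do.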

The only reason that I suggest these things is that I agree with Evan that JavaScript would ideally be used only as necessary in libraries, with the majority of applications never having to deal with it; however, with the current version there seem to be various applications where Elm code is not performant enough.

GordonBGood

unread,
Jan 1, 2017, 10:19:02 AM1/1/17
to Elm Discuss
I hit the post button accidentally so you only got a part of what I set out to say :)

As I said earlier in the thread, current 0.18 Elm is not "fast enough for practical purposes" for applications such as games and math.  For a simple instance, the community-contributed math library implementation of "primesBelow" (a Sieve of Eratosthenes) is horrendously slow due to the way it uses the current Array library: it is about 30 times slower than a purely functional approach, where an array-based sieve would usually be about 25 times faster.  I rewrote it to get within about three times the speed of functional approaches, but that still isn't very good.  A Haskell-style implementation, while still appearing immutable to Elm code, would win back most of the speed of using arrays.  Another application would be things like extended-precision math.  Currently the available library is just a wrapper around a raw JavaScript implementation; however, with the library as I would propose it (and with Elm generating more efficient comparison code), one would be able to implement this and other similar libraries entirely in Elm without much cost in speed.  There must be hundreds of similar applications in the math and games domains where writers currently have no choice but to interface with raw JavaScript, or at least generate JavaScript with another, more efficient compiler such as GHCJS.

I think that the software development world is moving away from writing JavaScript just as it moved away from writing assembly language long ago, which is why there is so much interest in all these transpilers, asm.js, wasm, etc.  It seems to me that the reason Elm has garnered so much interest for such a young language is that very reason, along with the fact that it can (potentially) handle both the front end and back end without external code; except that it isn't yet fast enough for these domains.  If it were (quickly) improved, I think it might become the Python (as to popularity) of the web-page and browser-application world.

Bob Zhang

unread,
Jan 1, 2017, 10:27:40 AM1/1/17
to elm-d...@googlegroups.com
Note that a high-quality optimizing compiler like BuckleScript can significantly outperform hand-written JS, let alone Elm.
(See the benchmark here: http://bloomberg.github.io/bucklescript/slides/preview/bb-reveal-example-presentation.html#/5/1)





--
Regards
-- Hongbo Zhang

Lourens Rolograaf

unread,
Jan 1, 2017, 12:19:31 PM1/1/17
to Elm Discuss
Dear grumpy old man, please stop complaining. We understand you are looking for speed. Elm is fast enough for standard web apps, and will be in more mature versions for graphical web apps.
All first attention is on the major benefits of using this reliable JavaScript alternative (you can read them for yourself at http://elm-lang.org/ ).
Should you have complaints about runtime errors, unreliability, difficult refactoring, unreliable versioning of packages, or other stability complaints about your web app made in Elm, everybody would like to hear them in this discussion group.

thank you
Lawrence


On Sunday, 1 January 2017 at 16:27:40 UTC+1, Bob Zhang wrote:

Richard Feldman

unread,
Jan 1, 2017, 2:35:41 PM1/1/17
to Elm Discuss
Lawrence, please keep it civil.

GordonBGood

unread,
Jan 1, 2017, 11:04:10 PM1/1/17
to Elm Discuss
On Sunday, 1 January 2017 22:27:40 UTC+7, Bob Zhang wrote:
Note that a high-quality optimizing compiler like BuckleScript can significantly outperform hand-written JS, let alone Elm.
(See the benchmark here: http://bloomberg.github.io/bucklescript/slides/preview/bb-reveal-example-presentation.html#/5/1)

The only reason that I suggest these things is that I agree with Evan that JavaScript would ideally only be used as necessary in libraries, with the majority of applications never having to deal with it; however, with the current version there seems to be various applications where Elm code is not performant enough.

Bob/Hongbo, I don't argue with you (and you have clearly demonstrated) that BuckleScript (and the OCaml front end) is fast.  The reasons I don't particularly care for OCaml for general use are the paucity of primitive data types (as discussed) and the limitations of its type system in general, as compared to more modern ML-family languages such as F# (whose syntax and use I quite like) and Haskell, plus the feeling I get using the OCaml syntax that it is a prototype of a modern language but not quite there.  With clearly long familiarity, you are used to overcoming OCaml's "warts", but for others these are real obstructions to the common acceptance of the language.

You have replied that for BuckleScript's use, with JavaScript output also having limited types, the limitations of primitive types have been overcome, and I can see how they could be, as long as one is using a 64-bit OCaml compiler so that Ints are large enough to fully express a 32-bit range (remember those OCaml tag bits).  For this limited use of output to JavaScript, many of OCaml's "warts" aren't a problem.

My latter complaint is more apt, I believe:
  1. Come now, tag bits in a statically typed language?  This really harks back to Lisp-style dynamically typed languages.
  2. The requirement for semicolons as an end of block (or even double semicolons, as for global effects?) at certain key places, although granted those are mostly for imperative forms of code with `do`/`while`/`for` that we would prefer not to use.
  3. In spite of being whitespace-delimited, also requiring that sub-blocks be delimited by begin..end or brackets?
  4. I'm sure there are others, as in the definition of the type system...
However, your implementation is most usable and valuable, as for this use you seem to have overcome most of the limitations of OCaml; and for many small uses your provision of the online compiler in the "try in browser" page is so useful that one doesn't have to go to the work of downloading a good IDE (OCaml-Top, or Visual Studio Code plus plugin) plus the node BuckleScript plugin.  Thank you very much for bringing it up, as it appears that your implementation is ahead of the alternative JavaScript transpilers as to stability and immediate usability, perhaps partially due to the stability of OCaml itself.  It is impressive that such a fairly young project produces such incredibly efficient JavaScript code!  Your project also seems well supported by a corporation, so it isn't going to disappear as so many other one-horse ponies do over the course of time.

As developer of the program, you can be most proud.  Your project is both practical and eminently usable.

Bob Zhang

unread,
Jan 2, 2017, 1:14:57 PM1/2/17
to elm-d...@googlegroups.com
Hi Gordon,
   Thanks for your lengthy reply.
   I didn't try to convince you that OCaml is better than Haskell; they are different styles : ). It just feels a little weird that "when you want performance, you should switch from a statically typed language (Elm here) to a dynamically typed language (JS)"; a decent compiler should produce much more efficient JS output than hand-written JS. Note that I have been working on the BuckleScript compiler for only one year; there are still many low-hanging fruits, and you can expect even better performance in the near future.
  Some minor corrections to your comment:
  - I am quite familiar with Haskell, OCaml, and F# (I did 3 years of research in PL; I learned F# first, Haskell later, and OCaml last). The expressivity of their type systems, in my opinion, ranks as follows:
  Haskell ~ OCaml > F# >> Elm
  (OCaml has a fairly advanced object system with structural typing and row polymorphism, which is incredibly useful for building FFI to JS objects.)
  - If syntax matters a lot, you may be interested in ReasonML: Facebook is working on a new syntax for OCaml that works seamlessly with BuckleScript, and some core ReactJS developers are also working on high-quality ReactJS bindings.
  Happy New Year, and enjoy your favorite language!


GordonBGood

unread,
Jan 2, 2017, 4:31:07 PM1/2/17
to Elm Discuss
BuckleScript can be used to link to Elm without writing any JavaScript to speak of.  Say in Elm one wants to call some JavaScript routine with an integer via a port `callOut` and receive an integer response via another port `callIn` as a subscription; then the following BuckleScript code will do the hook-up:

let app = [%bs.raw "Elm.Main.fullscreen()"]  (* get_app() *)

external rcv_callOut : (int -> unit) -> unit = "" [@@bs.val "app.ports.callOut.subscribe"]

external snd_callIn : int -> unit = "" [@@bs.val "app.ports.callIn.send"]

let () =
  rcv_callOut (fun x -> snd_callIn (x + 1));
  etc. ...

with the following JavaScript produced:

app.ports.callOut.subscribe(function (x) {
      app.ports.callIn.send(x + 1 | 0);
      return /* () */0;
    });

Obviously, there is a lot of overhead happening in the background through the port interface, so one wouldn't do this for such a trivial task as incrementing an integer; this just shows the interface.  Typical use would be some kind of long-running routine that needs to be faster than what current Elm can produce, and may include progress reports and checks for cancellation.  Normally in JavaScript there would be no return from the inner function, but in OCaml every function has a return value, even if it is a unit return, for which it returns zero as here; I don't know any way to prevent this from happening, but it doesn't cause a problem, as JavaScript will just ignore the returned value.

GordonBGood

unread,
Jan 2, 2017, 5:38:14 PM1/2/17
to Elm Discuss
On Tuesday, 3 January 2017 01:14:57 UTC+7, Bob Zhang wrote:
Hi Gordon,
   Thanks for your lengthy reply.
   I didn't try to convince you that OCaml is better than Haskell; they are different styles : ). It just feels a little weird that "when you want performance, you should switch from a statically typed language (Elm here) to a dynamically typed language (JS)"; a decent compiler should produce much more efficient JS output than hand-written JS. Note that I have been working on the BuckleScript compiler for only one year; there are still many low-hanging fruits, and you can expect even better performance in the near future.

Oh, you don't need to convince me about statically typed languages, as I dislike dynamic typing except (perhaps) for "quick-and-dirty" applications in languages such as Python.  The only way I would use JavaScript would be to get its native speed, which is why I am looking for an alternative; I would much prefer that Elm do it itself (the reason for this thread), but it appears that will not be the case for quite some time.

It's interesting that you feel you can get yet more performance out of BuckleScript.
 
  Some minor corrections to your comment
  - I am quite familiar with Haskell, OCaml, and F# (did 3 years research in PL, I learned F# first, Haskell later and OCaml as the last one), the expressivity of type system in my opinion  follow as below:
  Haskell ~ OCaml > F# >> Elm

Yes Bob/Hongbo, my learning progression was the same as yours: F# first as an introduction to functional programming; then, to see what all the furor was about, I learned Haskell (still learning the more advanced aspects); then, having heard good things about OCaml and knowing that F#'s syntax was somewhat derived from it, I used it a little too, but am nowhere near the stage of learning as with the other two.  I just recently discovered Elm and decided to investigate whether it was at a stable enough stage to use.
 
  (OCaml has a fairly advanced object system with structural typing and row polymorphism which is incredibly useful to build FFI to JS objects)

Yes, I can see OCaml's features; it's just that if I were looking for maximum flexibility of type system, I would prefer Haskell.  But that isn't really a big concern for the kinds of things I would use Elm for, for which its type system is adequate.  This is why I could accept Fable over BuckleScript if it were as well developed as to ease of use, speed, and stability, even though not as expressive, just because I know F# better and like its relative simplicity.

  - If syntax matters a lot, you may be interested in ReasonML, Facebook are working on a new syntax for OCaml and it works seamlessly with BuckleScript, some core ReactJS developers are also working on high quality ReactJS bindings
 
I have heard of ReasonML and the others, but don't really want to learn another language; Elm was easy, but that is what has turned me away from PureScript, along with PS not being as stable nor currently producing nearly as fast code.  My intentions are pretty clear: find an implementation of a language I already know and like that compiles to fast JavaScript, for the critical code only, until Elm improves.
 
  Happy New Year, and enjoy your favorite language!

Happy New Year to you, and I'm still looking for my favourite language ;) 

OvermindDL1

unread,
Jan 3, 2017, 4:45:14 PM1/3/17
to Elm Discuss
On Saturday, December 31, 2016 at 8:09:28 PM UTC-7, GordonBGood wrote:
I see that BuckleScript would work fine for JavaScript output and OCaml can be fast, but wouldn't int64 with two int's be a bit slow?  It's just that i prefer Haskell syntax and capabilities more than OCaml as it just feels like a more modern language.  I do like F# (and thus probabably Fable), but it isn't as pure a language as Haskell.  I think I'll see what GHCJS can do for me, once I can get it installed.

Unless you know some other way of representing an int64 in JavaScript?  A pair of two 32-bit integers is about the best you can do.  Using native JavaScript bitwise operations you get 32-bit integers; using a native JavaScript number you get a 64-bit float (53 bits of usable integer, if I recall correctly).
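The integer limits mentioned here are easy to check in any JS console (this is standard JavaScript number semantics, nothing library-specific):

```js
// Doubles represent integers exactly only up to 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1); // true
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1);         // true: precision lost
// Bitwise operators truncate their operands to signed 32-bit integers.
console.log((0xFFFFFFFF | 0) === -1);                         // true
```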

Also, OCaml and Haskell are about the same age, although OCaml is based on the older SML, but as for the 'feel' of it there are two things to note:

1.  OCaml's language is designed for fast parsing: the code that Bob Zhang gave above compiles in 0.015 seconds on my machine here.  Even very complex programs compile in seconds at most, compared to my 'usual' Haskell programs taking multiple minutes (or potentially hours on more complex programs that use a lot of HKTs).  Nearly every decision of OCaml's syntax was designed to make for a *very* fast compiler (and Elm was modeled as a mix of OCaml and Haskell syntax; see https://github.com/OvermindDL1/bucklescript-testing/blob/master/src/main_counter.ml for a working Elm example in OCaml).
2.  There is a preprocessor (of the AST, essentially a PPX) called ReasonML that is OCaml with a fluffed-up, more javascript'y (ew) syntax that many like if you want something more modern-feeling, but it is still just OCaml.


/me uses OCaml as their main static functional language on personal time


What I am most curious about is being able to make self-contained custom elements in some functional language compiled to JavaScript, with ease.  We're 'almost' there with Elm (you can make Elm into a custom element, but cannot use a custom element from within Elm very well yet).  Elm's subscription and registration approach would make building custom elements (web components) such a breeze once fixed up!  ^.^

GordonBGood

unread,
Jan 4, 2017, 12:03:34 AM1/4/17
to Elm Discuss
On Wednesday, 4 January 2017 04:45:14 UTC+7, OvermindDL1 wrote:
On Saturday, December 31, 2016 at 8:09:28 PM UTC-7, GordonBGood wrote:
I see that BuckleScript would work fine for JavaScript output and OCaml can be fast, but wouldn't int64 with two int's be a bit slow?  It's just that i prefer Haskell syntax and capabilities more than OCaml as it just feels like a more modern language.  I do like F# (and thus probabably Fable), but it isn't as pure a language as Haskell.  I think I'll see what GHCJS can do for me, once I can get it installed.

Unless you know any other way of representing int64 on javascript?  An array of two integers is about the best you can do.  Using a native javascript integer you get 32-bit.  Using a native javascript number you get a 64-bit float (53-bits if I recall correctly of usable integer).

No, that is the only way I can see; it's just a limitation of JavaScript.  And you remember correctly: IEEE 64-bit floats have a 53-bit significand (52 stored bits plus one implicit bit), with the sign bit held separately.

Also, OCaml and Haskell are about the same age, although OCaml is based on the older SML, but as for the 'feel' of it there are two things to note:

1.  OCaml's language is designed for fast parsing, like the code that Bob Zhang gave above compiles on 0.015 seconds on my machine here.  Even very complex programs compile in seconds at most, compared to my 'usual' Haskell programs taking multiple minutes (or potentially hours on more complex programs that use a lot of HKT's).  But near every decision of OCaml's syntax was designed to make for a *very* fast compiler (and elm was modeled as a mix of OCaml and Haskell syntax, see https://github.com/OvermindDL1/bucklescript-testing/blob/master/src/main_counter.ml as a working Elm example in OCaml).

Indeed, by every compilation-speed benchmark I have seen, OCaml beats the pack, sometimes by a lot.  That is good for my use of it as a BuckleScript front end, so it doesn't bog down trial-and-error development.  I have no complaints about Elm's compilation speed or syntax; it feels more modern than OCaml, which is a good thing.
 
2.  There is a PPX (preprocessor 'essentially', but of the AST) called ReasonML that is OCaml with a fluffed up, more javascript'y (ew) syntax that many like if you want something more modern feeling, but it is still just OCaml.

Bob mentioned it; I've had a look at it and like the concept, but am not sure how to install it on my system (Windows 10, using Visual Studio Code as an IDE for both Elm and BuckleScript).

Bob Zhang

unread,
Jan 4, 2017, 12:21:30 AM1/4/17
to Elm Discuss
Indeed, compilation speed is something I am most proud of.  It is not just 10 or 20 percent faster; it's one or two orders of magnitude difference : )

GordonBGood

unread,
Jan 4, 2017, 1:18:00 AM1/4/17
to Elm Discuss
Hi Bob/Hongbo,

I've run into my first speed problem with BuckleScript: when it decides it needs to form a closure over captured free binding(s), it creates a "function returning a function", with the outer function's arguments being the current state of the "pure" captured bindings and the inner returned function being the actual closure code.  When this happens anywhere near an inner loop, the code gets very slow.  Sometimes the compiler is smart enough to infer that the closure is not necessary, namely when the binding's value is an argument of an enclosing function; however, this does not happen for local `let` bindings, even when they are in the same scope as the caller/callee sites.

For example, the following bit-packed Sieve of Eratosthenes sieves over a range and returns the bit-packed sieve array (whose zero bits mark the odd primes) over that range (odds only).  It runs about five times slower than imperative for/while loops (sieving to a top limit of a quarter million).  The code is as follows:

```ml
let soe_loop top =
  if top < 3 then Array.make 0 0 else
  let ndxlmt = (top - 3) lsr 1 in
  let cmpsts = Array.make ((ndxlmt lsr 5) + 1) 0 in
  for loop = 1 to 1000 do (* do it many times for timing *)
    let pir = ref 0 in
    while !pir <= ndxlmt do
      let pi = !pir in
      let p = pi + pi + 3 in
      let rec nxtc ci =
        if ci > ndxlmt then () else
        let w = ci lsr 5 in
        cmpsts.(w) <- cmpsts.(w) lor (1 lsl (ci land 31));
        nxtc (ci + p) in
      let si = (p * p - 3) lsr 1 in
      if si > ndxlmt then pir := ndxlmt + 1 else (
        if cmpsts.(pi lsr 5) land (1 lsl (pi land 31)) == 0 then
          nxtc si;
        pir := pi + 1 )
    done
  done;
  cmpsts
```

where the outer `pi` loop is imperative and the inner `nxtc` loop is functional; the latter becomes the closure that captures the `p` value, as you can see in the following generated JavaScript:

```js
function soe_loop(top) {
  if (top < 3) {
    return Caml_array.caml_make_vect(0, 0);
  }
  else {
    var ndxlmt = ((top - 3 | 0) >>> 1);
    var cmpsts = Caml_array.caml_make_vect((ndxlmt >>> 5) + 1 | 0, 0);
    for (var loop = 1; loop <= 1000; ++loop) {
      var pir = 0;
      while (pir <= ndxlmt) {
        var pi = pir;
        var p = (pi + pi | 0) + 3 | 0;
        var nxtc = (function (p) {
          return function nxtc(_ci) {
            while (true) {
              var ci = _ci;
              if (ci > ndxlmt) {
                return /* () */0;
              }
              else {
                var w = (ci >>> 5);
                cmpsts[w] = cmpsts[w] | (1 << (ci & 31));
                _ci = ci + p | 0;
                continue;
              }
            }
          };
        }(p));
        var si = ((Caml_int32.imul(p, p) - 3 | 0) >>> 1);
        if (si > ndxlmt) {
          pir = ndxlmt + 1 | 0;
        }
        else {
          if (!(cmpsts[(pi >>> 5)] & (1 << (pi & 31)))) {
            nxtc(si);
          }
          pir = pi + 1 | 0;
        }
      }
    }
    return cmpsts;
  }
}
```

As the compiler knows that the `p` value is pure, it should know that it can reference it without danger of it being in an unknown state, so there is no need for a closure; and if `nxtc` is not a closure, then it can be inlined as usual.

The same thing happens currently when all loops are functional.

The backup workaround is to use imperative code, but that's ugly.  I wonder if you have any suggestions to avoid the closure?

Regards, Gordon 
 

Bob Zhang

unread,
Jan 4, 2017, 9:05:58 AM1/4/17
to Elm Discuss
Hi Gordon,
  As you can see, BuckleScript already did a very good job here; of course, there are still places for improvement. To write extremely high-performance code, you should avoid creating a closure in a tight loop; here you can lift `nxtc` to the top level. In the future, we will do this inside the compiler. Let's take further discussion off the list. -- Hongbo


GordonBGood

unread,
Jan 5, 2017, 12:32:34 AM1/5/17
to Elm Discuss
On Wednesday, 4 January 2017 21:05:58 UTC+7, Bob Zhang wrote:
Hi Gordon,
  As you can see, BuckleScript already did a very good job here; of course, there are still places for improvement. To write extremely high-performance code, you should avoid creating a closure in a tight loop; here you can lift `nxtc` to the top level. In the future, we will do this inside the compiler. Let's take further discussion off the list. -- Hongbo

Hi Hongbo, yes, BuckleScript did a very good job, other than 1) mistakenly determining that the closure was necessary, then 2) not automatically lifting the closure, and then 3) not inlining the resulting (non-closure) function call away (see Haskell's join-points optimizations).  I'm just sayin' ;)  In case you needed additions to your TODO list...

In fact, BuckleScript already catches most cases of these, and missed this one case because the enclosing loop had been tail call optimized (manually, in the case of the code I showed) and so became an "ugly imperative code" loop internally; the compiler then treats the binding `p`, derived from an impure loop variable, as impure and thus needing closure treatment, which seems to disqualify the resulting function as a candidate for inlining.  The compiler needs to see that the binding `p` is invariant (in fact guaranteed, as the compiler injected a let on the loop variable) during the whole creation and execution of the `nxtc` function, and therefore doesn't need the closure and can be inlined.  (BTW, it seems that BuckleScript ignores the [@@inline] and [@inlined] OCaml extensions?)

If the `nxtc` function is lifted to the top level, then it needs to be written so that the otherwise free bindings are arguments (`p` and `cmpsts` in this case) and thus it is no longer a closure; and even if there were an inner worker closure, BuckleScript already inlines and tail calls when captured bindings are arguments or derived from arguments of a function call, recognizing that they are pure.  There will still be a function call to the lifted `nxtc` where there wouldn't be if it were inlined, which means that inlining would be better, although the function call has negligible cost relative to the rest of the work in this case.

The only reason this was noticeable here is that it turns out the time cost is not so much the creation of the closure as the cost of binding a function (a closure in this case) to a variable.  It's interesting how incredibly slow JavaScript/V8 is at creating a function binding rather than just calling one - I calculate tens of thousands of CPU clock cycles; it must be dropping back to evaluating the expression on each loop rather than JIT compiling it, plus the expected time of creating a function object on the heap.  This isn't BuckleScript's problem, but BuckleScript can help the unwary avoid it.
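For illustration, the lifting being discussed can be sketched in plain JavaScript (hypothetical code reusing the `nxtc`/`p`/`cmpsts` names from the discussion; not actual compiler output):

```javascript
// Hypothetical sketch, not actual BuckleScript output: the closure
// version binds a new function object on every iteration of the outer
// loop, while the lifted version takes its former free variables
// (p, cmpsts) as explicit arguments, so nothing is allocated or bound
// per iteration.

function countWithClosure(cmpsts) {
  let count = 0;
  for (let p = 2; p * p < cmpsts.length; p++) {
    // a fresh closure capturing p and cmpsts is bound each iteration
    const nxtc = (ci) => (ci < cmpsts.length ? ci + p : ci);
    count += nxtc(p * p);
  }
  return count;
}

// lifted to the top level: a plain function, created once
function nxtc(p, cmpsts, ci) {
  return ci < cmpsts.length ? ci + p : ci;
}

function countLifted(cmpsts) {
  let count = 0;
  for (let p = 2; p * p < cmpsts.length; p++) {
    count += nxtc(p, cmpsts, p * p);
  }
  return count;
}
```

Both compute the same result; the second form is what manual lifting (or a smarter compiler pass) buys you.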

Stepping back, I don't have much to complain about, as before I met BuckleScript I would have been writing TypeScript/JavaScript mostly imperatively and BuckleScript/OCaml offers the option to do the same, if necessary for extremely time critical code!  I've just gotten used to avoiding that paradigm.

Yes, let's move the discussion somewhere else as it isn't relevant here. I just want to say thanks for the heads-up on BuckleScript, which I am quite enjoying in spite of its OCaml syntax foibles.  Now I can develop both Elm and BuckleScript inside similar development environments using the Visual Studio Code plugins. - Gordon
  


Bob Zhang

unread,
Jan 5, 2017, 9:46:26 AM1/5/17
to Elm Discuss
Indeed, in the OCaml native backend, `for` loops still dominate performance-critical code, since most optimizations do not work across function boundaries.
   There is still a long way for an optimizing compiler to go to catch up with carefully tuned code, but BuckleScript does not get in your way: you can still write low-level code with type-safety guarantees when performance matters.


GordonBGood

unread,
Jan 11, 2017, 10:37:27 AM1/11/17
to Elm Discuss

On Thursday, 5 January 2017 21:46:26 UTC+7, Bob Zhang wrote:
Indeed, in the OCaml native backend, `for` loops still dominate performance-critical code, since most optimizations do not work across function boundaries.
   There is still a long way for an optimizing compiler to go to catch up with carefully tuned code, but BuckleScript does not get in your way: you can still write low-level code with type-safety guarantees when performance matters.
 
To anyone who might try to get something useful out of this thread: after a week of trying BuckleScript, PureScript, and Fable (I couldn't get GHCJS to install on my machine), I have the following observations:
  1. As Hongbo asserts, BuckleScript is the fastest of the ones I tried, both in run speed and compilation speed.  It also doesn't get (much) in the way of writing imperative code for the very fastest output, nor does it (much) obstruct the forced generation of pretty much any special JavaScript functions one might want to call from BuckleScript or Elm.  My main objections to it are that it gets very wordy and obscure when one has to force it to generate uncurried calls to special JavaScript functions, and that I have grown to hate OCaml syntax even more than I ever did, as it feels very archaic compared to more modern-feeling functional languages such as F#, Haskell, and Elm:  not a whitespace-sensitive language, and no operator overloading other than through modules/"functors", which have a run-time cost - quite usable, but that doesn't mean I like it.
  2. Fable is a great concept, using F# (one of my favourite languages) as the front end and outputting JavaScript through Babel; however, it isn't (at least yet) mature enough to consider:  it is slow to compile (likely due to the added passes of compiling to the F# AST, then converting to the Babel AST, then finally compiling to JavaScript) and in many cases slow to run, the latter largely due to poor emulations of the F# libraries but also to generally poor code generation.  It doesn't have tail call optimization at all, although there is a plug-in that looks like it would do the job, and, as with BuckleScript/OCaml, one can use imperative code if one must.  One can output (emit) any JavaScript one might desire for calling from Elm or Fable, and such code can have type signatures attached for (some) type safety.  Other than speed, my main problem with it is the difficulty of controlling when functions are curried or not, which is a problem for temporary functions used functional-style.  This pretty much invalidates writing functional code when one has to drop to imperative code frequently in order to get performance.  If only one could decorate functions to show whether they are to be called non-curried, as one can in BuckleScript ("[@bs]").
  3. PureScript is a very powerful language comparable to Haskell, but being Haskell-like, there is no provision for writing imperative code at all, meaning that one would need to write JavaScript in order to accomplish this.  That pretty much precludes PureScript's use for me, as the things I need it for involve speed, and if it's more work to write JavaScript modules in some other language and then call them from PureScript (which is quite easy), I may as well do it more simply from Elm.  There is also the fact that PureScript depends on many library functions to support its Haskell-like expressiveness, so anything done in the language idiom is going to be quite bulky; calling those features also has a run-time cost, so they are slow.
  4. Although I could not try it, I suspect that GHCJS will also depend on many library calls in order to emulate what GHC/Haskell can do.  I did not investigate whether one can emit JavaScript directly from the language or how easy it is to call JavaScript from it, nor do I know how compilation speed compares.
BuckleScript can do basic transformation of tail calls inside (some) functions into loops, but not in all cases; however, within about a year that capability won't be very important, as all mainline browsers and Node.js will do this themselves once the ECMAScript specification is formalized (currently in flux).  Other optimizations, such as not making unnecessary function calls (the lack of which in Elm started this thread) or not creating too many objects, will become the more important ones.
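The tail-call-to-loop transformation referred to here can be sketched as follows (illustrative JavaScript, not any particular compiler's output):

```javascript
// A self tail call: the recursive call is the last thing the function
// does, so it is eligible for the transformation.
function loopRec(i, lmt) {
  if (i >= lmt) return i;
  return loopRec(i + 1, lmt); // tail position
}

// After the transformation: the arguments become mutable loop
// variables, and the tail call becomes another trip around the loop.
function loopIter(i, lmt) {
  while (true) {
    if (i >= lmt) return i;
    i = i + 1;
  }
}
```

The two are observationally equivalent, but the second uses constant stack space and avoids call overhead entirely.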

My general conclusion is that while one could use BuckleScript for the purpose of generating the fast JavaScript that Elm currently cannot, one has to be pretty committed to its use and is going to have to write some imperative code in a language where writing imperative code is not really what it was designed for.  For some uses one could do the same in Fable, but it is generally very slow at anything functional (I measured up to about six times slower than BuckleScript due to overuse of JavaScript objects).  I am starting to think that one may as well write in TypeScript, which was my first thought, in order to avoid having to know JavaScript too well (using TypeScript classes and interfaces is an option, not compulsory, and has a slight run-time cost).  However, I do admit that BuckleScript can do the job if one is willing to live with OCaml syntax and typing.

Bob Zhang

unread,
Jan 11, 2017, 11:20:24 AM1/11/17
to Elm Discuss
Hi Gordon, thanks for your summary.
    Just want to add that the BuckleScript compiler has only been in development for one year, and I now work on it almost full time (thanks to my employer), so you can expect more performance boosts coming soon.
    Personally, I don't mind any syntax - I am a huge fan of Common Lisp, as you can tell. But I understand syntax does matter to quite a lot of people, so you may be interested in checking out ReasonML by Facebook (a more familiar syntax for OCaml). OCaml is a very modular compiler with 7 IRs; ReasonML compiles to IR 0, while BuckleScript takes it from IR 4, so the combination of ReasonML and BuckleScript is seamless.
   Thanks -- Hongbo


GordonBGood

unread,
Jan 12, 2017, 12:43:11 AM1/12/17
to Elm Discuss
On Wednesday, 11 January 2017 23:20:24 UTC+7, Bob Zhang wrote:
Hi Gordon, thanks for your summary.
    Just want to add that BuckleScript compiler is only developed for one year, now I almost work full time on it (thanks to my employer), so you would expect more performance boost coming soon.

Hi Hongbo, That you've gotten this far in only a year is incredible; the project should be amazing after another year!
 
    Personally, I don't mind any syntax - I am a huge fan of Common Lisp, as you can tell. But I understand syntax does matter to quite a lot of people, so you may be interested in checking out ReasonML by Facebook (a more familiar syntax for OCaml). OCaml is a very modular compiler with 7 IRs; ReasonML compiles to IR 0, while BuckleScript takes it from IR 4, so the combination of ReasonML and BuckleScript is seamless.

I used Scheme for a year but stopped because I grew to dislike dynamic typing, even more intensely than I had before, for anything other than quick one-off projects; I got used to block brackets, but that is not to say I ever liked them!

On your advice, I've reread the ReasonML documents, as I had been misled by looking at ReasonML/Rebel, which is at too early a stage for me to consider (also, its support for BuckleScript is currently broken).  I liked the ReasonML syntax much better than OCaml's as it is more consistent; although it still has curly brackets and semicolons instead of whitespace-delimited blocks, I can live with that, as I have before, as long as it is consistent.  Unfortunately, I can't get the ReasonProject to install on my machine (Windows 10 64-bit) and I am not likely to pursue it at this time, as it isn't that important to me.

I understand that if I were able to install it, by "working seamlessly with BuckleScript", you mean that the "bsb" command will just take the output of the previous ReasonML build as its input to produce JavaScript?

BTW, I see one of the reasons that BuckleScript produces JavaScript that is so much faster than that produced by Elm:  you use JavaScript Arrays as the base for almost all of the data structures, whereas Elm uses tagged objects, which are very slow (at least on Google Chrome's V8/Node.js); it almost seems like the Elm compiler just outputs dynamically typed code with the "all data is just one type - an object with tags" model, although it does recognize that numbers, characters, and strings will be handled natively by JavaScript and don't need to be wrapped in a tagged wrapper object.
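Roughly, the two encodings being contrasted look like this (hypothetical shapes based on the discussion; both compilers' real output differs in detail):

```javascript
// Illustrative approximations of the two runtime representations for a
// cons list cell: an Elm-0.18-style tagged object vs. a plain array.

// tagged-object encoding: every cell is an object with a ctor tag
const consObj = (hd, tl) => ({ ctor: "::", _0: hd, _1: tl });
const nilObj = { ctor: "[]" };

// array encoding: a cell is just a two-slot array; 0 encodes nil
const consArr = (hd, tl) => [hd, tl];
const nilArr = 0;

const objList = consObj(1, consObj(2, nilObj)); // list [1, 2]
const arrList = consArr(1, consArr(2, nilArr)); // the same list
```

The array form allocates smaller, more uniform objects that JS engines handle well, which is one reason the generated code runs faster.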

It's too bad you can't apply your compiler optimizations to Elm.  Perhaps they could be applied, as Elm does produce an AST and is a very simple language, so presumably the AST is fairly simple too.

As an aside, have you ever looked at using an asm.js back end, as current mainline browsers mostly support it, and if so, did it make any difference to speed?

Regards, Gordon

OvermindDL1

unread,
Jan 12, 2017, 10:42:05 AM1/12/17
to Elm Discuss
On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote:
On your advice, I've reread the ReasonML documents as I got mislead by looking at ReasonML/Rebel and that is at too early a stage for me to consider (also support for BuckleScript is currently broken).  I liked the ReasonML syntax much better than OCaml as it is more consistent, although it still has curly brackets and semicolons instead of white space delimited block, I can live with that as long as I have before as long as it is consistent.  Unfortunately, I can't get the ReasonProject to install on my machine (Windows 10 64-bit) and I am not likely to pursue it at this time as it isn't that important to me.

I'm actually not a fan of the Reason syntax myself - I consider it horribly verbose and noisy - but then again I've been comfortable with OCaml/SML syntax for over a decade now.  However, you are correct: Reason has no Windows support 'yet' (it is on their roadmap), though BuckleScript works perfectly with Reason's HEAD right now elsewhere (most of the Windows issues are, oddly enough, because of npm stupidity).  BuckleScript itself supports Windows absolutely perfectly though (I was noisy about it at first, when it did not  ^.^).

However, I can see how you'd like the Reason syntax better, but do not discount the OCaml syntax.  The nice thing about the raw OCaml syntax is that it is not only blazing fast for the compiler to parse but also fast for a human to parse; you know what things will be once you learn it, and it is very simple.

 
On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote:
I understand that if I were able to install it, by "working seamlessly with BuckleScript", you mean that the "bsb" command will just take the output of the previous ReasonML build as its input to produce JavaScript?

Actually, Reason's build system can choose BuckleScript as its output directly - it has first-class support.  However, you could indeed translate Reason -> OCaml -> BuckleScript pretty easily anyway. 


On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote: 
BTW, I see one of the reasons that BuckleScript produces JavaScript that is so much faster than that produced by Elm:  you use JavaScript Arrays as the base for almost all of the data structures, whereas Elm uses tagged objects, which are very slow (at least on Google Chrome's V8/Node.js); it almost seems like the Elm compiler just outputs dynamically typed code with the "all data is just one type - an object with tags" model, although it does recognize that numbers, characters, and strings will be handled natively by JavaScript and don't need to be wrapped in a tagged wrapper object.

That is one reason, but not the only one.  A bigger reason is that BuckleScript 'types' the JavaScript code, such as by putting `| 0` on integer operations and the like, which allows most JavaScript VMs (V8, and especially Firefox's) to pre-JIT them significantly more efficiently.  However, it has more 'typing' that it can do, which I expect over time.  And yes, the arrays are more efficient, matching what OCaml itself does (arrays of bytes to store data).
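The `| 0` hint mentioned here works like this (a minimal sketch):

```javascript
// `x | 0` truncates x to a signed 32-bit integer. It both pins down
// integer semantics (no float results leak out) and tells the JIT,
// asm.js-style, that the value is always an int.

function intDiv(a, b) {
  return (a / b) | 0; // truncates toward zero: integer division
}

function addInt32(a, b) {
  return (a + b) | 0; // wraps around at 32 bits like a machine integer
}
```

For example, `intDiv(7, 2)` is `3` rather than `3.5`, and `addInt32` overflows by wrapping, exactly as a 32-bit machine integer would.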
 
 
On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote:
Its too bad you can't apply your compiler optimizations to Elm.  Perhaps they could be, as Elm does produce an AST and is a very simple language so presumably the AST is fairly simple too.

Bucklescript does not do the optimizations itself (well, it does some JavaScript-specific ones, like higher-arity functions and so forth); most of it is OCaml.  *However*, you could write an Elm compiler that compiles to OCaml (which could then be compiled to native, or to JavaScript via bucklescript), but the syntaxes are so close as it is that it would be easier to just write an elm-like library for OCaml instead.
 

On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote:
As an aside, have you ever looked at using an asm.js back end as current mainline browsers mostly support it, and if so did it make any different to speed?

Actually, that is what the typing stuff I mentioned above is - the asm.js decorations (though not full asm.js; things like '|0' are pretty universally supported, plus it is all backwards compatible).  I could imagine OCaml compiling directly to WebAssembly later anyway, but for working with JavaScript, being able to see the (very readable) output JavaScript via bucklescript is unmatched in usefulness.

Bob Zhang

unread,
Jan 12, 2017, 11:09:49 AM1/12/17
to Elm Discuss
We do have around 12 passes to optimize the JS code, and more advanced optimizations will come in 1.5.
Note that the optimizations are written carefully so they will not slow down compile time. To squeeze the compiler's performance, some performance-critical code was changed to C in a recent release.

GordonBGood

unread,
Jan 12, 2017, 11:25:34 PM1/12/17
to Elm Discuss
On Thursday, 12 January 2017 22:42:05 UTC+7, OvermindDL1 wrote:
On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote:
On your advice, I've reread the ReasonML documents as I got mislead by looking at ReasonML/Rebel and that is at too early a stage for me to consider (also support for BuckleScript is currently broken).  I liked the ReasonML syntax much better than OCaml as it is more consistent, although it still has curly brackets and semicolons instead of white space delimited block, I can live with that as long as I have before as long as it is consistent.  Unfortunately, I can't get the ReasonProject to install on my machine (Windows 10 64-bit) and I am not likely to pursue it at this time as it isn't that important to me.

I'm actually not a fan of the Reason syntax myself, I consider it horribly verbose and noisy, but then again I've been comfortable with OCaml/SML syntax for over a decade now.  However correct, Reason has no Windows support 'yet' (it is on their roadmap), however bucklescript works perfectly with its HEAD right now elsewhere (most of the windows issues are, oddly enough, because of npm stupidity actually).  Bucklescript itself supports windows absolutely perfectly though (I was noisy about it at first when it was not  ^.^).

Thanks for the input on ReasonML - that explains why I wasn't able to install it on Windows, and if it is so "noisy" then I probably don't really want it anyway.  Unlike you, I don't have 10 years with OCaml, although I do have many years with F#, pretty much since its beginning, which is likely why I find OCaml a bit archaic, as F# modernized it.  Anyway, OCaml/BuckleScript is usable to me and, as Hongbo says, will get better.
 
On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote: 
BTW, I see one of the reasons that BuckleScript produces JavaScript that is so much faster than that produced by Elm:  you use JavaScript Arrays as the base for almost all of the data structures, whereas Elm uses tagged objects, which are very slow (at least on Google Chrome's V8/Node.js); it almost seems like the Elm compiler just outputs dynamically typed code with the "all data is just one type - an object with tags" model, although it does recognize that numbers, characters, and strings will be handled natively by JavaScript and don't need to be wrapped in a tagged wrapper object.

That is one reason, but not the only one.  A bigger reason is that bucklescript 'types' the javascript code, such as by putting `| 0` on integer operations and such, which allow most javascript VM's (V8 and especially Firefox's) to pre-JIT them significantly more efficiently.  However it has more 'typing' that it can do, which I expect over time.

Yes, I saw the hints BuckleScript adds for the JIT compiler, which have become a somewhat standard output of transpilers to JavaScript.

 On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote:
Its too bad you can't apply your compiler optimizations to Elm.  Perhaps they could be, as Elm does produce an AST and is a very simple language so presumably the AST is fairly simple too.

Bucklescript does not do the optimizations itself (well it does some javascript-specific one like higher arity functions and so forth), most of it is OCaml.  *However*, you could write an Elm compiler that compiles to OCaml (which then could be compiled to native or to javascript-via-bucklescript), but the syntax's are so close as it is that it would be easier to just write an elm-like library for OCaml instead.

That is something like what Fable has done with its Arch and Elmish libraries.  Personally, I don't have much interest in this, as I see using BuckleScript only as a stopgap to fill in where Elm lacks, primarily for writing particular kinds of modules where Elm produces code that is too slow; I would prefer to use Elm as the front end, with the promise that eventually it will produce code fast enough that BuckleScript is not necessary other than for the occasional use to write libraries that need pure JavaScript to work.
 
 On Wednesday, January 11, 2017 at 10:43:11 PM UTC-7, GordonBGood wrote:
As an aside, have you ever looked at using an asm.js back end as current mainline browsers mostly support it, and if so did it make any different to speed?

Actually that is what the typing stuff I mentioned above is, it is the asm.js decorations (though not full asm.js, things like '|0' is pretty universally supported, plus it is all backwards compatible).  I could imagine OCaml compiling directly to webassembly later anyway, but for working with javascript being able to see the (very readable) output javascript via bucklescript is unmatched in usefulness.

Oh, I don't want to dump the JavaScript output, as it is very useful to be able to easily read what the compiler has done; I am just suggesting that in the future there might be an option for BuckleScript to output pure asm.js (and even later wasm, of course), and I wonder if there have been any experiments to show what the gains in speed might be for full asm.js.

Further to that, and to improving the Elm output as mentioned above, I don't quite see why it is such a "huge project" to improve Elm's code generator, given that the Elm AST preserves type information.  The current code generator is only a couple of thousand lines of Haskell code, and it wouldn't take that long to re-write the whole thing (a month or two?).  One of the things that makes this relatively easy for Elm is that there are so few allowed types and so few allowed type classes (which can't be expanded):  the type keyword only defines new tagged unions and has no other use - no classes, no interfaces, thus no methods and all that OOP complexity, etc.  The nice thing about working on the code generator is that it doesn't affect any other work going on - drop in a well-tested new code generator, which might produce completely different JavaScript including a completely different memory model, and it doesn't affect any other module or library.  Also, having such a code generator as a model would get much closer to being able to output pure asm.js or wasm in the future.

OvermindDL1

unread,
Jan 13, 2017, 10:18:48 AM1/13/17
to Elm Discuss
That is actually why I think Elm should compile to a different back-end, like OCaml/bucklescript or so.  The syntax is uniform enough that making an elm->ocaml/bucklescript transpiler would be just a matter of re-using most of the existing parser in OCaml, which is already beyond blazing fast in comparison.  It would significantly reduce Elm's compile time, it would get it onto a back-end that has far, far more optimizing passes than Elm itself does while being substantially better tested, and it would give a way to compile Elm to bare metal for very fast server-side page generation.  I'm actually using my elm'y library built on bucklescript in my work project (recompilations taking ~100ms compared to over 40s for the same code in the Elm project was the big reason why - it reduced turn-around time substantially), but it would be much nicer just to use Elm itself built on such a better compiler instead.

And for note, OCaml's compiler is pluggable - you can plug in at many steps during compilation; bucklescript just plugs into part of that pipeline, and Reason plugs into a higher part.  The highest part is called a 'pp' (pre-processor; bucklescript is a 'ppx' for comparison, Reason is a 'pp'), and Elm could easily be such an OCaml pre-processor, taking the Elm code and massaging it into OCaml's - the whole infrastructure is built for such things - and that, combined with Elm's versioning (the OCaml compiler can even output interface definitions for files to make calculating this trivial) and packaging system, would absolutely blow everything else away, especially if you could fall back to OCaml code for things that are difficult or impossible to do on the Elm side.  ^.^

Rupert Smith

unread,
Jan 13, 2017, 10:40:07 AM1/13/17
to Elm Discuss
On Friday, January 13, 2017 at 3:18:48 PM UTC, OvermindDL1 wrote:
That is actually why I think Elm should compile to a different back-end, like ocaml/bucklescript or so.  The syntax is uniform enough that making an elm->ocaml/bucklescript transpiler would be just a matter of re-using most of the existing parser in OCaml, which is already beyond blazing fast in comparison.  It would significantly reduce elm's compiling time, it would get it to a back-end that has far far more optimizing passes than elm itself does while being substantially better tested, and it would give a method of being able to compile elm to bare-metal for very fast server-side page generation. 

Now you're talking. 

All that this would take would be to write an Elm parser for the first stage of the OCaml pipeline? You'd also need to compile the Native modules - is there already some way to feed them into the OCaml pipeline?

OvermindDL1

unread,
Jan 13, 2017, 11:01:01 AM1/13/17
to Elm Discuss
On Friday, January 13, 2017 at 8:40:07 AM UTC-7, Rupert Smith wrote:
All that this would take would be to write an Elm parser into the first stage of the OCaml pipeline? You'd also need to compile the Native modules, is there already some way to feed them into the Ocaml pipeline?

There are 2 first stages depending on how you look at it.

The real first-first stage is not so much a compiler plugin as just a generic pre-processor; things like Camlp4 are this. In a high-level overview:
```sh
ocamlc -pp myPP ...
```
Although you would normally use a build system (bucklescript's 'bsb' is fantastic!), think of `myPP` just as a normal binary: you pass in file contents (source code) via STDIN and it passes back out changed text via STDOUT.  It is basically the same as preprocessing each file to another directory and then compiling 'those' with ocamlc (except faster, since ocamlc gets information from it and keeps it loaded).
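As a sketch of that contract, a pre-processor is just a stdin-to-stdout text filter (a toy JavaScript example; a real `-pp` like Camlp4 or the Reason parser does genuine syntax translation at this step):

```javascript
// Toy pre-processor transform: takes source text, returns rewritten
// source text. The rewrite here is a trivial placeholder.
function transform(source) {
  return "(* preprocessed *)\n" + source; // placeholder rewrite
}

// Wiring it up as a stdin/stdout filter in Node.js would look like:
//   let src = "";
//   process.stdin.on("data", (chunk) => (src += chunk));
//   process.stdin.on("end", () => process.stdout.write(transform(src)));
```

The compiler never knows or cares what language the input text was; it only sees the transformed output.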

A 'ppx' is an actual compiler plugin, passed the same way:
```sh
ocamlc -ppx myPPX
```
It is also basically a binary that operates in the same way as above, except that it operates on the OCaml AST at various stages in the pipeline (whatever hooks you register - you can have multiple); it translates the AST and returns the altered AST so the compiler continues on.

Bucklescript plugs in late in the compiler to take the AST and output it to JavaScript, for example.  Reason is a preprocessor only, as it uses non-OCaml syntax (it outputs OCaml syntax).  Elm on OCaml would just be a preprocessor as well (potentially with some ppx's too, if you want to add some compilation-pass stuff), all of which would be handled by the build system of course.  Bucklescript, as an example, has a few binaries like 'bsc', 'bsdep', and 'bsppx': bsppx is the actual ppx that handles some pass transformations, bsc is the actual compiler (which just wraps the ocaml compiler while adding the bucklescript transformers), etc.  The file 'bsb' is the build system (built on the fast Ninja build system) that handles the JSON file that defines a bucklescript project, automatically handling everything as a build system should (think of it as its version of elm-make).

Elm could comparatively easily become a preprocessor with its own wrappers to handle its building; the interaction with Elm would remain the same, it would just become significantly faster in compilation and execution speed while opening it up to native compilation as well (or, if you use node, just compile out via bucklescript to node as normal).  This way Elm remains the tight, highly focused language specific to this purpose, but you could even write normal OCaml code that calls into Elm or vice versa (an Elm layer could enforce that any function the Elm code calls out to is 'pure', for example, or not - whatever).

Joey Eremondi

unread,
Jan 13, 2017, 12:10:28 PM1/13/17
to elm-d...@googlegroups.com
Even a direct Elm-to-OCaml translation wouldn't be too hard. Elm is not the same as OCaml, but my understanding is that most of Elm's features are included in OCaml (row polymorphism, strict evaluation, first-class functions). OCaml has lots of features Elm doesn't want (like mutable references), but that's not a problem, and could even allow for some nice backend optimizations.

This would also provide a really nice way to do Elm on the backend. The big question is, how would one write such a translator? Are there Haskell libraries for generating OCaml? Or would the compiler need to be written in OCaml?


OvermindDL1

unread,
Jan 13, 2017, 12:38:03 PM1/13/17
to Elm Discuss
On Friday, January 13, 2017 at 10:10:28 AM UTC-7, Joey Eremondi wrote:
Even a direct Elm-to-OCaml translation wouldn't be too hard. Elm is not the same as OCaml, but my understanding is that most of Elm's features are included in OCaml (row polymorphism, strict evaluation, first-class functions). OCaml has lots of features Elm doesn't want (like mutable references), but that's not a problem, and could even allow for some nice backend optimizations.

This would also provide a really nice way to do Elm on the backend. The big question is, how to write such a translator? Are the Haskell libraries for generating OCaml? Or would the compiler need to be written in OCaml?

No clue about Haskell libraries, but at the very least you could just do a textual translation from Haskell.  However by using the OCaml compiler libraries you'd save time and it would save a lot of code regardless.

And yep, OCaml has everything that Elm uses and a lot more that could be used for some optimizations, but to start a direct translation would be easy.

Even the direct OCaml code is 'almost' identical to Elm.  You know the normal Elm Form example at http://elm-lang.org/examples/form?  Here is the same thing in OCaml:
'Almost' identical, and some of the non-identical parts are just minor API changes in this example.

But yes, translating Elm to OCaml/Bucklescript would not at all be a hard task.  :-)

GordonBGood

unread,
Jan 13, 2017, 8:51:39 PM1/13/17
to Elm Discuss
Elm "Native" libraries are JavaScript, and that is what BuckleScript does: as well as output JS code, it also has JS FFI to allow BuckleScript code to call to/receive calls from JS code.  I think this would be handled by an Elm PP just as OCaml handles FFI references -  leaving FFI blanks to be later "filled in" by later passes.

GordonBGood

unread,
Jan 13, 2017, 11:02:38 PM1/13/17
to Elm Discuss


On Saturday, 14 January 2017 00:38:03 UTC+7, OvermindDL1 wrote:
But yes, translating Elm to OCaml/Bucklescript would not at all be a hard task.  :-)

An Elm front-end to OCaml sounds interesting and I especially like the sound of "faster compilation speed" (about 400 times faster???).  However, although I can write some OCaml code, I am not equipped to help on this project because I don't have your (and Hongbo's) knowledge about the inner workings of the OCaml compiler.  That said, I think that there is one area of optimization where such a front-end will have problems just as BuckleScript does now: Elm's data structures are all guaranteed immutable where OCaml/BuckleScript's are not.  This means that BuckleScript needs to create closures around any captured free variables unless it does special processing to ensure that these do not mutate within the scope of use of the closure, which could be wide.  With Elm's immutability guarantee, any mutability the compiler introduces is local in scope, limited to things such as tail call optimization, which turns tail calls into loops with mutable loop variables taking the place of what would normally be function arguments.
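The closure issue can be illustrated with a small JavaScript sketch (illustrative only, not compiler output): when the source language permits mutation, a closure must observe later writes to a captured variable, whereas a guaranteed-immutable binding can safely be captured by value at closure-creation time.

```javascript
// With mutation allowed, a closure captures the *variable*, so it
// observes writes that happen after the closure is built:
function mutableCapture() {
  let x = 1;
  const get = () => x; // captures the variable x, not its value
  x = 2;               // mutation after the closure is created
  return get();        // sees the mutated value
}

// If the source language guarantees immutability (as Elm does), a
// compiler may freely copy the value when the closure is created:
function immutableStyle() {
  const x = 1;
  const get = ((captured) => () => captured)(x); // capture by value
  return get();                                  // always the original value
}

console.log(mutableCapture()); // 2
console.log(immutableStyle()); // 1
```

This is why a BuckleScript-style backend must analyze whether a captured variable can mutate, while an Elm-specific backend can skip that analysis entirely.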

My idea was just to either re-write or make improvements to the Elm back end of the current Elm compiler, which I can handle as I have enough Haskell, but although I might be able to make the resulting output JavaScript satisfactory as to run speed, this would not likely help with speed of compilation, which I assume is the result of slow steps leading from parsing to producing the AST (???).  From a quick look at the code, there seem to be several passes but only one that does basic optimization before the AST is generated.  Some optimizations such as inlining and the limited tail call optimizations it does are part of the back end.  I suppose this general structure is common to most compilers just as for OCaml other than the number of optimization passes and other than that Elm does not have a pass plug-in capability.

I'm wondering why the Elm compiler is so slow at parsing if that is where the slow-down is:  There is no basic reason that Haskell code output needs to be any slower than OCaml code output; the usual reason for such a dramatic slow-down is often that the author(s) haven't paid careful enough attention to non-strict processing and have let it get away on them, which is usually seen by huge heap memory use and garbage collection taking a huge portion of the execution time.  These sorts of problems can often be fixed by forcing/hinting strictness at key junctures in the code.  This should be investigated, as some simple changes could make all the difference.  If this could be done simply, writing an Elm pre-parser for OCaml may be overkill, although it would have the advantages of a more optimized back-end (BuckleScript) already in the works.

I guess what I am saying is that I am not quite ready to give up on the current Elm compiler, and if no one else has the time to investigate this front end slowness, it seems I may have to look into it myself.

Bob Zhang

unread,
Jan 13, 2017, 11:49:48 PM1/13/17
to Elm Discuss

Note that by default OCaml is immutable; a purely functional style does not guarantee more optimization opportunities. If you profile real-world OCaml programs vs real-world Haskell programs you will know what I mean.
Let's forget about the JS backend: in practice, the OCaml native backend runs faster mostly because programmers have more precise control over memory allocation; you can inline assembly in a performance-critical path, and it does matter a lot in practice.

Richard Feldman

unread,
Jan 14, 2017, 12:40:53 AM1/14/17
to Elm Discuss
Keep in mind that code is the easy part; the major thing standing between Elm and a different compilation target than JavaScript is 1-2 years of design work to figure out a quality user experience.
 
I'm wondering why the Elm compiler is so slow at parsing if that is where the slow-down is

Evan recently rewrote the parser to be much faster.

You can try a preview binary of the new version if you're curious. :)

GordonBGood

unread,
Jan 14, 2017, 5:13:40 PM1/14/17
to Elm Discuss
On Saturday, 14 January 2017 12:40:53 UTC+7, Richard Feldman wrote:
Keep in mind that code is the easy part; the major thing standing between Elm and a different compilation target than JavaScript is 1-2 years of design work to figure out a quality user experience.

Yes, Richard, I agree with you (and Evan) that "code is the easy part".  I don't want a different compilation target other than JavaScript, although eventually asm.js and wasm will likely be a requirement; I just want the current Elm compiler to produce the fastest JS possible.  That's why it seemed to me that a re-write of the current code generator to consider types (which seem available from the AST) to produce more efficient JS would be just drop-in code that would not affect anything else.
 
I'm wondering why the Elm compiler is so slow at parsing if that is where the slow-down is

Evan recently rewrote the parser to be much faster.

You can try a preview binary of the new version if you're curious. :)

I saw that over on elm-dev, but haven't tried it because compilation speed isn't a problem for the Elm code I have written so far.  The only reason I brought it up is OvermindDL1's comment that compiling some OCaml/BuckleScript code (that presumably did the same thing as the Elm code) took about 0.1 seconds as compared to 40 seconds with the Elm compiler - a 400 times speed-up!  We weren't given details of the code or test conditions and whether one was an incremental compilation, but that sounds quite serious and would affect the usability of Elm.  If that data is verifiable, a speed-up of double or even quadruple doesn't begin to touch the difference, and it should be investigated.

Richard Feldman

unread,
Jan 14, 2017, 11:14:55 PM1/14/17
to Elm Discuss
I'm wondering why the Elm compiler is so slow at parsing if that is where the slow-down is

Evan recently rewrote the parser to be much faster.

You can try a preview binary of the new version if you're curious. :)

I saw that over on elm-dev, but haven't tried it because compilation speed isn't a problem for the Elm code I have written so far.  The only reason I brought it up is OvermindDL1's comment that compiling some OCaml/BuckleScript code (that presumably did the same thing as the Elm code) took about 0.1 seconds as compared to 40 seconds with the Elm compiler - a 400 times speed-up!  We weren't given details of the code or test conditions and whether one was an incremental compilation, but that sounds quite serious and would affect the usability of Elm.  If that data is verifiable, a speed-up of double or even quadruple doesn't begin to touch the difference, and it should be investigated.

If only there were a binary posted somewhere, based on a compiler that had just been rewritten to improve build times, so that someone could post a benchmark instead of speculation! ;)

Bob Zhang

unread,
Jan 14, 2017, 11:56:18 PM1/14/17
to Elm Discuss
I would be very surprised if parsing is the bottleneck.
In most cases, type checking and register allocation (which could be quadratic) take much more time. OCaml's type checking algorithm is very clever, almost linear in most practical use cases.

GordonBGood

unread,
Jan 15, 2017, 1:07:40 AM1/15/17
to Elm Discuss
Yes to that.  If someone has a concrete example of code that takes much longer to compile than the same code implemented in another equivalent language such as BuckleScript, I would like to see it.  I expect BuckleScript to compile somewhat faster due to its OCaml origins being finely tuned for compilation speed, but I wouldn't expect it to be more than four or five times faster, and as you say, Evan has done something that makes this quite a bit faster.  Theoretically, Elm should be able to compile fast due to the simplicity of the language syntax.

Meanwhile, I think I'll just take a look at seeing how hard it would be to improve the code generation, since that doesn't really impact anything else. 

GordonBGood

unread,
Jan 15, 2017, 1:43:02 AM1/15/17
to Elm Discuss
On Sunday, 15 January 2017 11:56:18 UTC+7, Bob Zhang wrote:
I would be very surprised if parsing is the bottleneck.
In most cases, type checking and register allocation (which could be quadratic) take much more time. OCaml's type checking algorithm is very clever, almost linear in most practical use cases.

Yet Evan made some relatively minor changes to parsing which are said to have more than doubled compilation speed.  If parsing is usually a minor part, that would seem to say that the other parts are even faster.

It would seem that Elm does all of its type checking in the front end and almost nothing is done with types in the code generation.  Elm basically has fixed types, as even the "type" keyword can currently only be used to define variations of tagged unions, although such new types can be nested to any level; if nested deeply, I suppose the structure could take a large amount of time to type check if not done cleverly.

I don't want to dig into the front end too much other than to try to nail down any problems as I think it is beyond my capabilities, but I plan to look into the code generator in more depth this next week.  Much of my experience is dealing with low level code.

art yerkes

unread,
Jan 15, 2017, 3:13:22 AM1/15/17
to elm-d...@googlegroups.com
After working with the elm compiler just a bit from the outside, I think there might be build time improvements to be had by improving the build info dropped in elm-stuff.  I think reducing the number of graph.dat and .elmi files read during the build process might help.


Bob Zhang

unread,
Jan 15, 2017, 2:54:17 PM1/15/17
to elm-d...@googlegroups.com
Hi Gordon, 
In case you are interested, this is a link I made to show the excellent compile time performance of BuckleScript (https://twitter.com/bobzhang1988/status/810508070350680066).
It takes around 0.58s to do a clean build: 55 modules on a 13-inch MacBook Pro (cold start, no caching). 
It is not uncommon to see 100 times slowness when building Elm vs OCaml in an incremental build (dev time). The reason is that Elm (correct me if I am wrong) always needs a link step, so whenever you change a file it will trigger the linker; this gets significantly worse if your project is not small. BuckleScript, by contrast, compiles one OCaml module to one ES6 module, so it does not need linking during dev time - best for incremental builds.


GordonBGood

unread,
Jan 15, 2017, 9:23:55 PM1/15/17
to Elm Discuss
On Monday, 16 January 2017 02:54:17 UTC+7, Bob Zhang wrote:
Hi Gordon, 
It is not uncommon to see 100 times slowness when building Elm vs OCaml in an incremental build (dev time). The reason is that Elm (correct me if I am wrong) always needs a link step, so whenever you change a file it will trigger the linker; this gets significantly worse if your project is not small. BuckleScript, by contrast, compiles one OCaml module to one ES6 module, so it does not need linking during dev time - best for incremental builds.

Hi Hongbo, that was interesting, especially what you say about why Elm may compile very much slower than OCaml.  If the difference is just link time (a link-everything step it perhaps inherits from Haskell) then this is surely fixable?  Can't Elm do the same as BuckleScript - compile to ES6 modules?

In fact, I don't see why these compile-to-JS languages insist on targeting ES5 at all: it made sense in 2012 when the Elm project was started, but now all mainline browsers and platforms have been updated to near 100% ES6 compatibility.  Or do compilers such as Elm's want to continue to support Internet Explorer (which will never be updated) forever?

Bob Zhang

unread,
Jan 15, 2017, 10:11:51 PM1/15/17
to Elm Discuss
Sorry, I didn't mean ES6 module; I meant mapping one OCaml (or Elm) module to one runnable JS module (AMD, CommonJS, or ES6) so you get separate compilation.

This is just my observation, please take it with a grain of salt: in general, it takes 20ms-80ms for BuckleScript to compile a single file, but that's almost all you need for the BuckleScript dev feedback loop. In Elm it's different: you need to recompile that module and regenerate a monolithic JS file; the larger the project gets, the worse compilation time you get in Elm. If you have experience in C++, you know the bottleneck used to be linking; it is quite common for a C++ project to spend 20 minutes in linking.

I don't know how much Elm is performance-engineered, but we are very sensitive about performance, even at the nanosecond level (https://github.com/bloomberg/bucklescript/pull/1082); this little engineering adds up.

Richard Feldman

unread,
Jan 15, 2017, 10:21:28 PM1/15/17
to Elm Discuss
you need to recompile that module and regenerate a monolithic JS file; the larger the project gets, the worse compilation time you get in Elm. If you have experience in C++, you know the bottleneck used to be linking; it is quite common for a C++ project to spend 20 minutes in linking.

Considering Evan is working on asset management for the next release, I doubt "compile everything to one file" will be true after it lands. (That release is presumably still several months away; this is just speculation on my part.)

Evan also wrote C++ at Google, so avoiding long linking times is definitely on his radar. ;)

GordonBGood

unread,
Jan 16, 2017, 1:09:48 AM1/16/17
to Elm Discuss
Richard and Hongbo, I'm relieved to know that others are aware of the long compilation times = linking times for big projects and that it will likely get fixed in the next several months.  I'm not going to worry about it in that case, and Hongbo's information on how little time it can take as OCaml does it is comforting, as there then is no reason that Evan can't come up with something similar.  I'll just look into whether I can help on the speed of the generated code from the back end, which is something for which I may be able to help, and which Evan doesn't have time to look into currently.  It may turn out to be too much for me, but at least I'll know.

Bob Zhang

unread,
Jan 16, 2017, 8:07:56 AM1/16/17
to Elm Discuss
This will be a fundamental change to the architecture of the elm compiler, it is cool to see elm is also moving in this direction in the next several months!

Rupert Smith

unread,
Jan 16, 2017, 9:09:43 AM1/16/17
to Elm Discuss
On Saturday, January 14, 2017 at 1:51:39 AM UTC, GordonBGood wrote:
All that this would take would be to write an Elm parser into the first stage of the OCaml pipeline? You'd also need to compile the Native modules, is there already some way to feed them into the Ocaml pipeline?

Elm "Native" libraries are JavaScript, and that is what BuckleScript does: as well as output JS code, it also has JS FFI to allow BuckleScript code to call to/receive calls from JS code.  I think this would be handled by an Elm PP just as OCaml handles FFI references -  leaving FFI blanks to be later "filled in" by later passes.

Yes, that is what I was getting at. If you compile Elm -> OCaml -> JavaScript, then the Native stuff in Elm just gets copied into the output JavaScript directly (unless of course passing it through OCaml is worthwhile and can optimize it). But if you are going Elm -> OCaml -> native x86_64 binary, then the JavaScript needs to be compiled through OCaml. So I was just wondering if the OCaml tool-chain already has a JavaScript front-end for it?

GordonBGood

unread,
Jan 17, 2017, 12:23:33 AM1/17/17
to Elm Discuss
BuckleScript is an OCaml to JavaScript back end, and we were talking about the possibility of an Elm pre-parser/front end to OCaml in order to get more efficient JavaScript code from Elm source than the current Elm compiler can produce; although this is possible, it would still take up to a year for someone to code and test it.

Even though somewhat time consuming, the above isn't that hard, because Elm is both a simpler language than OCaml and its features are generally just simpler subsets of OCaml features.  Writing a JavaScript front end for OCaml would be an ambitious undertaking because JavaScript does not resemble OCaml in the least, and there wouldn't seem much point: one would have the slow compile speeds of Elm plus the compilation of JavaScript to OCaml to more efficient JavaScript through BuckleScript.  As to the possibility of generating machine code output, again there doesn't seem to be much point, as that is not Elm's purpose and one would do better to use OCaml or some other similar language directly.

In general, compiling a statically typed language such as Elm or OCaml to a dynamically typed language like JavaScript isn't so hard, as the compiler front end just enforces static type checking and lets the dynamic language run-time do its dynamic typing; but going the other way wouldn't make much sense, so I doubt there is a JavaScript front end for OCaml.  The ReasonML front end/pre-parser also just takes OCaml-like code that is more consistent and perhaps more easily readable and turns it into OCaml code, but that is relatively easy as they are similar.  However, even that has taken about a year thus far.

Rupert Smith

unread,
Jan 17, 2017, 6:25:08 AM1/17/17
to Elm Discuss
On Tuesday, January 17, 2017 at 5:23:33 AM UTC, GordonBGood wrote:
On Monday, 16 January 2017 21:09:43 UTC+7, Rupert Smith wrote:
On Saturday, January 14, 2017 at 1:51:39 AM UTC, GordonBGood wrote:
All that this would take would be to write an Elm parser into the first stage of the OCaml pipeline? You'd also need to compile the Native modules, is there already some way to feed them into the Ocaml pipeline?

Elm "Native" libraries are JavaScript, and that is what BuckleScript does: as well as output JS code, it also has JS FFI to allow BuckleScript code to call to/receive calls from JS code.  I think this would be handled by an Elm PP just as OCaml handles FFI references -  leaving FFI blanks to be later "filled in" by later passes.

Yes, that is what I was getting at. If you compile Elm -> OCaml -> JavaScript, then the Native stuff in Elm just gets copied into the output JavaScript directly (unless of course passing it through OCaml is worthwhile and can optimize it). But if you are going Elm -> OCaml -> native x86_64 binary, then the JavaScript needs to be compiled through OCaml. So I was just wondering if the OCaml tool-chain already has a JavaScript front-end for it?

Writing a JavaScript front end for OCaml would be an ambitious undertaking because JavaScript does not resemble OCaml in the least and there wouldn't seem much point:

Yes, that is what I thought. I probably missed some context out when quoting, but my question was in response to OvermindDL1's suggestion that moving to OCaml would open up the possibility of compiling to different back-ends other than javascript.

An alternative might be to re-write the Native modules in the Elm core in OCaml. There isn't a huge amount of it.

OvermindDL1

unread,
Jan 17, 2017, 11:27:36 AM1/17/17
to Elm Discuss
On Saturday, January 14, 2017 at 3:13:40 PM UTC-7, GordonBGood wrote:
I saw that over on elm-dev, but haven't tried it because compilation speed isn't a problem for the Elm code I have written so far.  The only reason I brought it up is OvermindDL1's comment that compiling a Ocaml/BucketScript code (that presumably did the same thing as the Elm code) took about 0.1 seconds as compared to 40 seconds with the Elm compiler - a 400 times speed-up!  We weren't given details of the code or test conditions and whether one was an incremental compilation, but that sounds quite serious and would affect the usability of Elm.  If that data is verifiable, a speed up of double or even quadruple doesn't begin to touch the difference and should be investigated.

It was a rewrite of a messaging system at work; it was in Elm but we had issues that necessitated the use of too many ports.  The new one written in OCaml (transpiled to JavaScript by BuckleScript) does the same thing in about the same lines of code, except no JavaScript was used at all.  The line count is hovering around 6k for both versions; a compile for both is done just by calling elm-make or bsb respectively, and those are the output times for a clean compile (no cache for either), with both spread across about 31 source files.  I'm not sure why the Elm compilation is so slow; the compilation is happening on Windows, so that could be a factor.  And sadly no, I cannot release work code without consent.  ;-)

A note though: the Elm code was severely hampered by the type system in Elm (lack of HKTs or HPTs), so we had to get creative with use of records and unions - it is entirely possible we hit a very slow case.  In the OCaml code we used some GADTs and polymorphic modules so as not to resort to those hacks.


On Saturday, January 14, 2017 at 9:14:55 PM UTC-7, Richard Feldman wrote:
If only there were a binary posted somewhere, based on a compiler that had just been rewritten to improve build times, so that someone could post a benchmark instead of speculation! ;)

I might be able to test out the new compiler - we still have the old Elm code in the repo (just that chunk is unused; we are still using other parts of Elm that are not as large and not as full-of-ports) - if you're really curious? 


On Sunday, January 15, 2017 at 8:21:28 PM UTC-7, Richard Feldman wrote:
you need to recompile that module and regenerate a monolithic JS file; the larger the project gets, the worse compilation time you get in Elm. If you have experience in C++, you know the bottleneck used to be linking; it is quite common for a C++ project to spend 20 minutes in linking.

Considering Evan is working on asset management for the next release, I doubt "compile everything to one file" will be true after it lands. (That release is presumably still several months away; this is just speculation on my part.)

Ooo, now that would be awesome.  :-)


On Tuesday, January 17, 2017 at 4:25:08 AM UTC-7, Rupert Smith wrote:
Yes, that is what I thought. I probably missed some context out when quoting, but my question was in response to OvermindDL1's suggestion that moving to OCaml would open up the possibility of compiling to different back-ends other than javascript.

An alternative might be to re-write the Native modules in the Elm core in OCaml. There isn't a huge amount of it.

Precisely this.  I've already done quite a large chunk of it as a test and it translates very easily (and becomes type safe, which Elm's is not, as I've hit 'undefined's in pure Elm code before - already in the bug tracker, but why did they happen at all?!).  I kept it identical to the Elm API as well, though if I broke Elm's API in a couple of minor ways I could *substantially* reduce the number of allocations done...  But yes, I've rewritten most of Elm's native Core code as well as some third-party libraries like navigation, all without a touch of JavaScript and all of it type safe the whole way - mostly playing around, but we ended up using a lot of it at work anyway (I still need to get around to cleaning it up and releasing it...).  An Elm->OCaml transpiler is entirely possible and I've no doubt it would take substantially less than a year to do: if I could get my work to pay for it, I'd guess two weeks at most - perhaps two days for the language conversion if everything goes well, and the rest filling out the parts of the API I've not yet done so as to remove all JavaScript, which would also make it compilable to native code once system checks were added (another chunk of time for that, but not much).

Rupert Smith

unread,
Jan 18, 2017, 11:53:31 AM1/18/17
to Elm Discuss
On Tuesday, January 17, 2017 at 4:27:36 PM UTC, OvermindDL1 wrote:
On Tuesday, January 17, 2017 at 4:25:08 AM UTC-7, Rupert Smith wrote:
An alternative might be to re-write the Native modules in the Elm core in OCaml. There isn't a huge amount of it.

Precisely this.  I've already done quite a large chunk of it as a test and it translates very easily (and becomes type safe, which Elm's is not, as I've hit 'undefined's in pure Elm code before - already in the bug tracker, but why did they happen at all?!).  I kept it identical to the Elm API as well, though if I broke Elm's API in a couple of minor ways I could *substantially* reduce the number of allocations done...  But yes, I've rewritten most of Elm's native Core code as well as some third-party libraries like navigation, all without a touch of JavaScript and all of it type safe the whole way - mostly playing around, but we ended up using a lot of it at work anyway (I still need to get around to cleaning it up and releasing it...).

In my view, this also provides very good justification for not allowing native code into packages.elm-lang.org. Porting Elm to another platform in this way is manageable.

OvermindDL1

unread,
Jan 18, 2017, 12:56:08 PM1/18/17
to Elm Discuss
On Wednesday, January 18, 2017 at 9:53:31 AM UTC-7, Rupert Smith wrote:
In my view, this also provides very good justification for not allowing native code into packages.elm-lang.org. Porting Elm to another platform in this way is manageable.

Entirely, yeah - native code even in core Elm is buggy.  In my translations I wrote very thin FFI calls to the base-most JavaScript calls (mostly DOM stuff), fully typed, then built everything on those typed interfaces (it is very TypeScript'y/PureScript'y at that point); I've had no undefined's, no oddities like that at all, and the old 'native' parts were entirely in OCaml this time, just bound to those very few fully typed FFI calls (which I need to clean up too; newer versions of BuckleScript have simplified making them since I wrote this file).  For example, here is the web element node interaction FFI I made (a slightly older version, as the one at work has a bit more and is a bit cleaner).  These are the kind of things user code should not write, and indeed an Elm->OCaml compiler should not expose such functionality, but rather should call to things exposed from OCaml that may use things like this:  https://github.com/OvermindDL1/bucklescript-testing/blob/master/src/web_node.ml (and ignore the polyfill at the bottom; I later converted that to pure OCaml/BuckleScript as well)

GordonBGood

unread,
Jan 23, 2017, 12:52:12 PM1/23/17
to Elm Discuss
I both like what you have to say about why writing OCaml code is so much more efficient than writing Elm code (and, working with Elm in the last week, I agree completely with you) but also despair that it is true, given all of the warts in the Elm compiler.

I think everyone hangs around this and other Elm discussion forums because we love the Elm concept of a simple functional language that promises to eliminate the need to deal with JavaScript; but unfortunately, due to the compiler shortcomings, that promise is not delivered other than for the most basic of programs.  I'm glad to see that Evan resists adding everyone's favourite feature to the language and actually continues to reduce syntax to a bare core of functionality.

Ideally, the Elm compiler would get completely re-written both to deal with the compilation speed issues (hopefully the work on "asset management" will handle that) and to use the available static type information in the back end to generate much more efficient JS code as BuckleScript does (in most cases).  This will be an even larger project, as in order to get real JS code speed improvements for some cases the memory model will have to be completely changed to something like (or exactly) that of the BuckleScript back end.  Come now, text-tagged JS records as the primary data structure?  So (as Evan said) this is a big project: changes will have to be made to pass the type information to the code generator back end ***and*** the back end completely re-written to use that type information, and while we are at it we may as well change the JS memory model to use JS Arrays (no text tags) as the primary data structure as BuckleScript does.  This may make the resulting JS code less debuggable, but that isn't why we want to use Elm - we hope that all debugging can be done within the Elm environment.
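To make the memory-model comparison concrete, here is a hedged JavaScript sketch of the two encodings of a cons list. The `ctor`/`_0`/`_1` field names follow Elm 0.18's conventions, while the array encoding approximates BuckleScript's (arrays for constructors with arguments, small integers for constant constructors); neither is literal compiler output.

```javascript
// Elm-0.18-style: union values are objects carrying a string tag.
function consTagged(hd, tl) { return { ctor: "::", _0: hd, _1: tl }; }
var nilTagged = { ctor: "[]" };

function sumTagged(xs) {
  var total = 0;
  while (xs.ctor !== "[]") { total += xs._0; xs = xs._1; } // string compare per step
  return total;
}

// BuckleScript-style: plain arrays, with constant constructors as integers.
function consArray(hd, tl) { return [hd, tl]; }
var nilArray = 0; // the empty list becomes a small integer

function sumArray(xs) {
  var total = 0;
  while (xs !== 0) { total += xs[0]; xs = xs[1]; } // integer compare per step
  return total;
}

var tagged = consTagged(1, consTagged(2, consTagged(3, nilTagged)));
var arr = consArray(1, consArray(2, consArray(3, nilArray)));
console.log(sumTagged(tagged)); // 6
console.log(sumArray(arr));     // 6
```

The array form allocates smaller objects and dispatches on integer comparisons rather than string comparisons, which is one reason the BuckleScript representation tends to be faster, at some cost to readability of the generated JS.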

Unfortunately and realistically, there seems to be only one major contributor to the Elm compiler and build system - Evan himself - and he is under increasing pressure to do more timely updates in a variety of areas, not only as to code efficiency.  Also, the plan as proposed above requires changes in at least two major parts of the compiler - the AST code builder and the back end code generator - so either one person needs to do both or there will be co-ordination involved.  This work would precede any other necessary work on further compiler optimization passes a la BuckleScript.

As you say, the easiest thing to do would be to just write a stand-alone Elm2OCaml program, which could then easily become the "pp" front end of an alternative Elm compiler producing much more efficient JS code through BuckleScript, with even more BuckleScript optimizations promised in the near future (or a native code alternative back end).  Again as you say, it is very easy to write minimal JS interfaces in OCaml, so that there would then be almost no need for Native code modules at all.

Unfortunately, if we do that in as short a time as you say is possible, work on the Elm compiler will likely never catch up to that effort, and Elm, the language, will become nothing but a language specification for an alternate front end to OCaml, just as ReasonML is.  In a way, I'd be sorry to see that happen, as Elm could be an independent language force in its own right.  Once Elm's core Native libraries have been re-written into the OCaml environment, the ease of use of the resulting combination will likely mean that most serious users will choose that development environment, which then splits development efforts - the cause of the (at least near) death of many other capable languages (D comes to mind).

Perhaps this is the best alternative, as then Evan and other major contributors could concentrate on refining the language spec without the drain on their limited time to also work on the implementation of the language environment.

If we want to prevent this, we need more contributors to Evan's work on compiler upgrades, if that is possible, rather than an Elm front end for OCaml.

It seems to me that the old rule against optimizing early doesn't apply to compilers, at least as to the choice of memory model for the code generator and the assumption that type information isn't essential for efficient (and reliable) back end code.  Having to rectify those omissions now is a lot of work!

In fact, Fable is going through the same output-code efficiency problems, made worse because its goal is to support the full, more-complex-than-Elm F# language specification:  its back end memory model is very similar to Elm's, and the resulting code is up to about six or seven times slower than the same algorithms produce under BuckleScript - much as Elm's output code is; Fable also seems slow to compile.  One problem that both Fable and Elm must address is a consistent way to handle argument currying.  Fable does this by use case, where some kinds of functions are always curried (with an execution-time overhead) and others are not; Elm handles it by always pseudo-currying, using hidden wrapper JS functions to apply wrapped functions to fixed numbers of arguments at a time as determined by the program context, but again at a cost in performance (although perhaps less than Fable's more direct multi-level currying).
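The hidden wrapper functions mentioned here can be sketched in plain JavaScript.  This is a simplified model of the F2/A2 arity-wrapper scheme Elm 0.18's output uses (the real runtime covers arities up to 9 and has more cases); the point is that even the "fast path" pays for an arity check and an extra call frame compared with a direct `fun(a, b)` call:

```javascript
// Simplified model of the F2 wrapper Elm 0.18 emits around
// every two-argument function.
function F2(fun) {
  function wrapper(a) { return function (b) { return fun(a, b); }; }
  wrapper.arity = 2;
  wrapper.func = fun;
  return wrapper;
}

// A2 applies a wrapped function to two arguments at once:
// fast path when the arity matches, otherwise one argument at a time.
function A2(fun, a, b) {
  return fun.arity === 2 ? fun.func(a, b) : fun(a)(b);
}

var add = F2(function (a, b) { return a + b; });

console.log(A2(add, 1, 2)); // saturated call via the fast path: 3
console.log(add(1)(2));     // partial application still works: 3
```

Even when A2 takes the fast path, every call site goes through the A2 helper and the arity test, which is exactly the kind of per-call overhead being discussed.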

OvermindDL1

unread,
Jan 23, 2017, 1:26:49 PM1/23/17
to Elm Discuss
On Monday, January 23, 2017 at 10:52:12 AM UTC-7, GordonBGood wrote:
I think everyone hangs around this and other Elm discussion forums because we love the Elm concept of a simple functional language that promises to eliminate the need to deal with JavaScript; but unfortunately, due to the compiler shortcomings, that promise is not delivered other than for the most basic of programs.  I'm glad to see that Evan resists adding everyone's favourite feature to the language and actually continues to reduce syntax to a bare core of functionality.

Entirely yeah, TEA is a fantastic architecture.


On Monday, January 23, 2017 at 10:52:12 AM UTC-7, GordonBGood wrote:
Ideally, the Elm compiler would be completely re-written, both to deal with the compilation speed issues (hopefully the work on "asset management" will handle that) and to use the available static type information in the back end to generate much more efficient JS code, as BuckleScript does (in most cases).  This will be an even larger project because, in order to get real JS code speed improvements in some cases, the memory model will have to be completely changed to something like (or exactly) that of the BuckleScript back end - continue using text-tagged JS records as the primary data structure?  So (as Evan said) this is a big project, as changes will have to be made to pass the type information to the code generator back end ***and*** to completely re-write the back end to use that type information; while we are at it, we may as well change the JS code memory model to use JS Arrays (no text tags) as the primary data structure, as BuckleScript does.  This may make the resulting JS code less debuggable, but that isn't why we want to use Elm - we hope that all debugging can be done within the Elm environment.

One of Elm's big things is 'never needing to debug the javascript' though, so I could see it doing that.


On Monday, January 23, 2017 at 10:52:12 AM UTC-7, GordonBGood wrote: 
Unfortunately and realistically, there seems to be only one major contributor to the Elm compiler and build system - Evan himself - and he is under increasing pressure to do more timely updates in a variety of areas, not only code efficiency.  Also, the plan as proposed above requires changes in at least two major parts of the compiler:  the AST builder and the back end code generator, so either one person needs to do both or there will be co-ordination involved.  This work would precede any other necessary work on further compiler optimization passes a la BuckleScript.

That is why I think having Elm compile to another target would save Evan a tremendous amount of work.  A translation layer is quite a lot easier than a full compiler (especially the more similar the semantics already are, like Elm and OCaml very much are).  He'd be free to design the language much more easily.  OCaml already has fantastic error reports, although not 'quite' up to Elm's standards (but pretty close in many ways), and adding better error messages to the OCaml compiler would benefit a lot more people outside of Elm as well (plus the OCaml compiler is oddly fun to hack on, it is really well designed).


On Monday, January 23, 2017 at 10:52:12 AM UTC-7, GordonBGood wrote:
As you say, the easiest thing to do would be to just write a stand-alone Elm2OCaml translator, which could then easily become the "pp" front end of an alternative Elm compiler producing much more efficient JS code through BuckleScript, with even more BuckleScript optimizations promised in the near future (or a native-code alternative back end).  Again as you say, it is very easy to write minimal JS interfaces in OCaml, so there would then be almost no need for Native code modules at all.

Bucklescript code can already be compiled as ordinary OCaml.  Bucklescript adds some extensions (the OCaml language is extensible via ppx's as well) to do things like platform testing, so you can make code that compiles for the web do something else when on native (like output html or no-op for event registrations), and that ppx can be added to a normal OCaml compiler via the usual `-ppx` flag to bring those extensions to a native compile as well, so you get free platform detection for writing platform-specific code.


On Monday, January 23, 2017 at 10:52:12 AM UTC-7, GordonBGood wrote:
Unfortunately, if we do that in as short a time as you say is possible, work on the Elm compiler will likely never catch up to that effort, and Elm, the language, will become nothing but a language specification for an alternate front end to OCaml, just as ReasonML is.  In a way, I'd be sorry to see that happen, as Elm could be an independent language force in its own right.  Once Elm's core Native libraries have been re-written in the OCaml environment, the ease of use of the resulting combination will likely mean that most serious users will choose that development environment, which then splits development efforts - which was the cause of the (at least near) death of many other capable languages (D comes to mind).

Yeah, that is why I definitely agree that Elm should make OCaml a first-class back end; that way compilation to JavaScript via bucklescript or to native code via ocamlc is easy, with all the optimizations afforded by the OCaml compiler on both paths.


On Monday, January 23, 2017 at 10:52:12 AM UTC-7, GordonBGood wrote:
Perhaps this is the best alternative, as then Evan and other major contributors could concentrate on refining the language spec without the drain on their limited time to also work on the implementation of the language environment.

If we want to prevent this, we need more contributors to Evan's work on compiler upgrades, if that is possible, rather than an Elm front end for OCaml.

This is also why I've not started such a project, I do not want it to distract from Elm.  The most I'll do (and am using) is my own OCaml TEA-like library for now.


On Monday, January 23, 2017 at 10:52:12 AM UTC-7, GordonBGood wrote:
It seems to me that the old rule against optimizing early doesn't apply to compilers, at least as to the choice of memory model for the code generator and the assumption that type information isn't essential for efficient (and reliable) back end code.  Having to rectify those omissions now is a lot of work!

In fact, Fable is going through the same output-code efficiency problems, made worse because its goal is to support the full, more-complex-than-Elm F# language specification:  its back end memory model is very similar to Elm's, and the resulting code is up to about six or seven times slower than the same algorithms produce under BuckleScript - much as Elm's output code is; Fable also seems slow to compile.  One problem that both Fable and Elm must address is a consistent way to handle argument currying.  Fable does this by use case, where some kinds of functions are always curried (with an execution-time overhead) and others are not; Elm handles it by always pseudo-currying, using hidden wrapper JS functions to apply wrapped functions to fixed numbers of arguments at a time as determined by the program context, but again at a cost in performance (although perhaps less than Fable's more direct multi-level currying).

Bucklescript de-curries as much as possible, however you can also force it in the type system explicitly by adding the annotation type of `[@bs]` to a function (type) definition, that enforces uncurrying at the type level and will even propagate in usage as expected to make sure accidental currying of it is not done (though you can still explicitly curry it by wrapping it in a curried type).  In most cases it de-curries very well and you never need to use `[@bs]` (the only real time I've used it is on DOM callback registrations with more than one argument to make sure I do not accidentally pass a curried version to one, never used it in 'user' code as of yet).
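BuckleScript's approach works the other way around from Elm's wrappers: functions with known arity compile to plain n-ary JS functions and direct calls, and only applications whose arity can't be proven at compile time go through a runtime curry helper.  A rough sketch of that fallback in plain JavaScript (`curry2` here is an illustrative stand-in, simplified from the kind of helper BuckleScript's runtime uses):

```javascript
// Known-arity case: BuckleScript emits a plain two-argument function
// and calls it directly, with no wrapper at all.
function add(a, b) { return a + b; }

// Unknown-arity fallback: check the function's declared length and
// either call directly or apply one argument at a time.
function curry2(f, a, b) {
  return f.length === 2 ? f(a, b) : f(a)(b);
}

// A manually curried function exercises the slow path.
var curriedAdd = function (a) { return function (b) { return a + b; }; };

console.log(add(1, 2));                // direct call: 3
console.log(curry2(add, 1, 2));        // length === 2, direct: 3
console.log(curry2(curriedAdd, 1, 2)); // length === 1, falls back: 3
```

Since most call sites in typical code are fully saturated, the de-curried direct call is the common case, which is a large part of why the generated code runs faster.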

Robin Heggelund Hansen

unread,
Jan 23, 2017, 2:05:19 PM1/23/17
to Elm Discuss
I don't understand this. Elm currently has better code output than Babel and Typescript. Choosing Elm over those gives me faster applications (though, I've never needed more speed) as well as smaller bundles. An application written with React+Immutable+Moment, will have much more code than an equivalent Elm application, it will also be much slower unless you have steel discipline and are willing to write more code. Elm's compiler is also faster than both Babel and Typescript, and compiler speed will get *much* faster in the next release. In my experience, Elm is already better than Javascript in every conceivable way, and that's before taking static typing into account. True, I don't write games, but if I did I probably wouldn't do it in an immutable language due to garbage collector concerns. Depending on the game, I wouldn't even write it in a javascript environment due to the lack of threads.

Why would you want arrays instead of tagged-objects as the primary data-structure? Just because Bucklescript does it doesn't make it faster. Try benchmarking it yourself. I did. Depending on the browser, accessing and/or changing an array isn't necessarily faster than accessing/altering a javascript object.
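The benchmarking suggested here is easy to reproduce.  A hypothetical micro-benchmark of the two access patterns (names and iteration count are illustrative; the winner genuinely varies by engine, as the engines' hidden-class and elements-kind optimizations differ, so it should be run in each target browser):

```javascript
// Compare reading fields of a string-tagged object vs. slots of an array.
function bench(label, fn) {
  var t0 = Date.now();
  var acc = 0;
  for (var i = 0; i < 1000000; i++) acc += fn(i);
  console.log(label + ": " + (Date.now() - t0) + " ms, checksum " + acc);
}

// The two representations under discussion.
var obj = { ctor: "Pair", _0: 1, _1: 2 }; // Elm 0.18-style tagged record
var arr = [0 /* numeric tag */, 1, 2];    // array-based alternative

bench("object field", function (i) { return obj._0 + obj._1; });
bench("array index ", function (i) { return arr[1] + arr[2]; });
```

A micro-benchmark like this only measures raw access, though; the larger cost discussed elsewhere in the thread is the run-time string comparisons on the tags, which this loop deliberately does not exercise.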

Finally, many people today are using Elm in production. There isn't a general consensus amongst Elm's users that the language is too slow, outputs too much code or is slow to compile (a cold compile of my app takes ~9 seconds. That is NOTHING compared to an equivalent TypeScript application I'm working on, and isn't noticed in practice because of incremental compiles). The problem with Elm (if you indeed could call it a problem) is the lack of features. Many people would like better asset management, which is why Evan is working on it. Personally, I would like some sort of reflection support, as well as nested record syntax. Actually, I would gladly take release of the local-storage module before any of those, and re-connect notifications in elm-websockets.

The short of it is, the problems you're mentioning in this thread aren't problems for the vast majority of Elm developers. Had they been, they would've been addressed. Personally, I'm glad Evan isn't focused on the things you've proposed. There are other things I want that I'm glad have a higher priority.

Richard Feldman

unread,
Jan 23, 2017, 2:11:29 PM1/23/17
to Elm Discuss
I think everyone hangs around this and other Elm discussion forums because we love the Elm concept of a simple functional language that promises to eliminate the need to deal with JavaScript; but unfortunately, due to the compiler shortcomings, that promise is not delivered other than for the most basic of programs.

Emphasis mine. We have 80,000 lines of Elm code in production and we're so happy with it, we shout it from the rooftops. Plenty of others have written blog posts about their success stories using Elm on large applications. You are not oblivious to this.

Please feel free to criticize the language, but it's not cool to willfully misrepresent reality—even if only for hyperbolic effect. Beginners read these forums, and they will mistake what you've said here for truth.

Unfortunately and realistically, there seems to be only one major contributor to the Elm compiler and build system - Evan himself - and he is under increasing pressure to do more timely updates in a variety of areas, not only code efficiency.  Also, the plan as proposed above requires changes in at least two major parts of the compiler:  the AST builder and the back end code generator, so either one person needs to do both or there will be co-ordination involved.  This work would precede any other necessary work on further compiler optimization passes a la BuckleScript.

It might be useful to know that your priorities are wildly different than 99% of Elm programmers I've met. To nearly everyone outside this thread, Elm's current compiler is the language's biggest selling point, and performance of the compiled code is not a concern. I understand that for you it is a big concern, but you should know that you are an outlier in this regard.

If you are hoping these compiler optimizations will jump the priority queue, or that work on Elm's compiler will go from single-threaded (Evan) to a distributed system (a very expensive change), I think you are likely to be disappointed on both counts.

In contrast, it seems like your priorities align very well with Bob's priorities for BuckleScript. If I were you, I'd just use BuckleScript. :)

GordonBGood

unread,
Jan 23, 2017, 8:27:47 PM1/23/17
to Elm Discuss
On Tuesday, 24 January 2017 02:05:19 UTC+7, Robin Heggelund Hansen wrote:
I don't understand this. Elm currently has better code output than Babel and Typescript. Choosing Elm over those gives me faster applications (though, I've never needed more speed) as well as smaller bundles. An application written with React+Immutable+Moment, will have much more code than an equivalent Elm application, it will also be much slower unless you have steel discipline and are willing to write more code. Elm's compiler is also faster than both Babel and Typescript, and compiler speed will get *much* faster in the next release. In my experience, Elm is already better than Javascript in every conceivable way, and that's before taking static typing into account. True, I don't write games, but if I did I probably wouldn't do it in an immutable language due to garbage collector concerns. Depending on the game, I wouldn't even write it in a javascript environment due to the lack of threads.

I'll agree that Elm produces better code than Babel (which is what Fable uses as a back end), but TypeScript can be as close to JavaScript as you want it to be (not that I want to write in TypeScript, as it is too non-functional and too much like JavaScript with typing).  Choosing Elm gives code that is somewhat faster than Babel-generated code, though not than TypeScript (if one is willing to work at it), but Elm is definitely much easier to develop in.  I agree that I don't want to touch React/Immutable/Moment.  Again as you say, Elm's current compiler is also faster than Babel (maybe TypeScript too; as I say, I really don't want to go there for large projects), but as per OvermindDL1's comments, it seems that for large projects it takes a long time to link together the pieces, though that is being worked on.

I'm not so much into writing games, and as you say the JavaScript environment isn't multi-threaded, so there are limitations there.  I am into heavy-duty math computations (for which not being multi-threaded is also a limitation); however, I expect in the next year or two that the entire client code environment will make some radical shifts with wasm so these things become possible, and hopefully Elm will have evolved to the point where writing a wasm back end will be fairly easy.  I regard all the development we are doing now as prototyping for the near future.  That said, what we produce is useful immediately.  I have faith that Evan's work on "asset management" will lead to faster compile times before I need them, so that isn't a problem.  But the fact remains that Elm's code output is several times slower than BuckleScript's, which does have an impact for many uses, just not for your uses, where you find its speed good enough.

As to games and heavy-duty math, sure there are better programming environments for that, but it is quite amazing what one can do even now with the fast browser engines available to us:  although limited by being single-threaded, BuckleScript's JS output can come within about a factor of two of highly optimized C++ code when both are written single-threaded.
 
Why would you want arrays instead of tagged-objects as the primary data-structure? Just because Bucklescript does it doesn't make it faster. Try benchmarking it yourself. I did. Depending on the browser, accessing and/or changing an array isn't necessarily faster than accessing/altering a javascript object.

The reason is that BuckleScript proves that Arrays are faster than "string"-tagged objects, and I have tried benchmarking it myself.  In fact, I have gone further and manually substituted Arrays for the string-tagged objects in the generated Elm code to show that this is the reason.  The problem isn't so much Arrays versus objects/records as the string tags:  since the Elm JS output preserves type information only through these tags, it continuously requires string processing to determine the type of an object at run time.  Eliminating these strings by using the type information the compiler already has would greatly speed things up even if objects were used, with the further advantage of Arrays being that their indices are numeric, for slightly less processing (depending on the browser engine used).

This becomes readily apparent the more functional the code is, with its use of tuples (tagged objects), lots of records (tagged objects), lists (nested tagged objects) and so on, all passed as (essentially untyped) arguments across functions.
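The string-tag cost described here shows up in how pattern matches compile.  A sketch of the two representations for a `Maybe`-like value (the record-with-`ctor` shape matches what Elm 0.18 emits; the array shape is a simplified model of a numeric-tag encoding like BuckleScript's, whose exact layout differs):

```javascript
// Elm 0.18 representation: a record carrying a string "ctor" tag.
// Every pattern match compares strings at run time.
var elmJust = { ctor: "Just", _0: 42 };
function elmWithDefault(dflt, m) {
  return m.ctor === "Just" ? m._0 : dflt; // string comparison per match
}

// Numeric-tag representation (illustrative): a plain array whose first
// slot is a constructor number assigned at compile time from the type.
var bsJust = [0, 42]; // tag 0 stands for Just; payload at index 1
function bsWithDefault(dflt, m) {
  return m[0] === 0 ? m[1] : dflt; // integer comparison per match
}

console.log(elmWithDefault(0, elmJust)); // 42
console.log(bsWithDefault(0, bsJust));   // 42
```

The behavior is identical; the difference is that the numeric tag needs no string handling at run time, and it is only available because the compiler's type information fixes the constructor numbering at compile time.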
 
Finally, many people today are using Elm in production. There isn't a general consensus amongst Elm's users that the language is too slow, outputs too much code or is slow to compile (a cold compile of my app takes ~9seconds. That is NOTHING compared to an equivalent typescript application I'm working on, and isn't noticed in practice because of incremental compiles). The problem with Elm (if you indeed could call it a problem) is the lack of features. Many people would like better asset management, which is why Evan is working on it. Personally, I would like some sort of reflection support, as well as nested record syntax. Actually, I would gladly take release of the local-storage module before any of those, and re-connect notifications in elm-websockets.

Yes, Elm is fast enough for many purposes.  Tree-shaking programs such as the Google Closure Compiler reduce code size.  Compile time is currently adequate for many uses, although slow compared to something like OCaml/BuckleScript, which has been expressly optimized for compile speed.

Some of your wish list is likely more library related than core Elm; and I have no objections to some extensions of even core Elm as long as they don't impact the ease of use/learning etc. of the core language.  However, we appreciate that almost everyone has some pet features from another programming environment that they want to see brought to Elm, and that Evan must carefully weigh the merits of each against his overall Elm goals.

The short of it is, the problems you're mentioning in this thread aren't problems for the vast majority of Elm developers. Had it been, they would've been addressed. Personally, I'm glad Evan isn't focused on the things you've proposed. There are other things I want that I'm glad is having a higher priority.

Different strokes...

I am not really promoting the use of OCaml/BuckleScript, which has its own warts (currently), although Hongbo has done an incredible job with it:  I dislike OCaml except insofar as it closely resembles Elm/Haskell/F#.  My point is that for those of us who need speed - at least a few have chimed in on this thread - if it isn't addressed in the Elm compiler, then Elm might migrate to being a front end to OCaml, which would split development efforts and possibly hurt the language.

My other point is that the sooner it is done the easier it will be.

And it doesn't affect those that find current Elm speed adequate - all that is needed is enough contributors and co-ordination between the different developments.

GordonBGood

unread,
Jan 23, 2017, 8:41:55 PM1/23/17
to Elm Discuss
On Tuesday, 24 January 2017 01:26:49 UTC+7, OvermindDL1 wrote:
Bucklescript de-curries as much as possible, however you can also force it in the type system explicitly by adding the annotation type of `[@bs]` to a function (type) definition, that enforces uncurrying at the type level and will even propagate in usage as expected to make sure accidental currying of it is not done (though you can still explicitly curry it by wrapping it in a curried type).  In most cases it de-curries very well and you never need to use `[@bs]` (the only real time I've used it is on DOM callback registrations with more than one argument to make sure I do not accidentally pass a curried version to one, never used it in 'user' code as of yet).

Yes, I've used the `[@bs]` notation, usually on very low-level code, to force no currying (or no passing of any arguments); as you say, BuckleScript is very good at determining an appropriate use of currying.  If one were to use an Elm front end to OCaml/BuckleScript, it would be nice to add the ability to inject these macro-type annotations into the code, probably as specially formed comments, so that in effect we could write all the Native code modules in the high-level environment, which works very well in BuckleScript.

One thing that BuckleScript does by default that breaks type safety is not do array bounds checks, but that wouldn't be a problem for an Elm front end as Elm does not use (mutable) arrays directly.