Virtual DOM Benchmark

ryan....@panesofglass.org

unread,
Oct 27, 2014, 10:44:18 PM
to mith...@googlegroups.com
A colleague sent this link to me earlier, and I was quite surprised to see how poorly Mithril compared. I believe this is the result of either a bias in the benchmark or bad Mithril code, and thought you might want to know about it and fix it if possible.

https://localvoid.github.io/vdom-benchmark/

Cheers,
Ryan

Leo Horie

unread,
Oct 28, 2014, 10:04:49 AM
to mith...@googlegroups.com, ryan....@panesofglass.org
From a very quick glance, it looks like the Mithril benchmark isn't using keys correctly


This line uses `n.key` where n is a node. In Mithril, `key` is in `attrs`, so presumably that should be `n.attrs.key`.
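For reference, in a Mithril template `key` is passed through `attrs`, so a vnode carries it there. A minimal sketch (illustrative shapes only, not Mithril's internal code):

```javascript
// A vnode roughly as produced by m("li", {key: 42}, "item"):
// {tag: "li", attrs: {key: 42}, children: "item"}
function m(tag, attrs, children) {
  return {tag: tag, attrs: attrs || {}, children: children};
}

// Illustrative key lookup, as a keyed diff would do it:
function keyOf(vnode) {
  return vnode.attrs.key; // not vnode.key
}

var node = m("li", {key: 42}, "item");
// keyOf(node) === 42, while node.key === undefined
```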

I'd need to go install Dart and figure out how to get this to build properly on my machine to try it out before sending a PR.

ryan....@panesofglass.org

unread,
Oct 28, 2014, 10:17:10 AM
to mith...@googlegroups.com, ryan....@panesofglass.org
On Tuesday, October 28, 2014 9:04:49 AM UTC-5, Leo Horie wrote:
> I'd need to go install Dart and figure out how to get this to build properly on my machine to try it out before sending a PR.

Try the Chrome Dev Editor: https://chrome.google.com/webstore/detail/chrome-dev-editor-develop/pnoffddplpippgcfjdhbmhkofpnaalpg

a...@zenexity.com

unread,
Oct 28, 2014, 11:04:52 AM
to mith...@googlegroups.com, ryan....@panesofglass.org
I think node.key is fine; the node type is defined here: https://github.com/localvoid/vdom-benchmark/blob/master/lib/generator.dart#L6

I couldn't find any obvious issues with the benchmark; I'm also curious because in my own testing, Mithril always came out on top of React, even in synchronous mode.

a...@zenexity.com

unread,
Oct 28, 2014, 6:41:40 PM
to mith...@googlegroups.com, ryan....@panesofglass.org, a...@zenexity.com
I played with the benchmark a bit locally; it seems pretty legit.

Specifically, I looked at React and Mithril. If the number of spans is lower (for instance, 100 instead of 5000), Mithril is a bit faster; however, for a large number of items, yes, React is faster. It's quite hard to compare two very different approaches, but here's my interpretation:

- Mithril is much lighter, and its infrastructure code generally runs faster than React's, so for a small number of node changes this makes a bigger difference.

- For a moderate number of operations, appendChild (used by Mithril) is fine; however, for a big, wholesale change (like reversing a 5000-item array), innerHTML (used by React) is somehow more efficient. This used to be a no-brainer a few years back, but I thought modern browsers were very efficient with both APIs.

- The benchmark is quite kind to React. It uses divs and spans, which are quite lightweight in React's world, but if we were to use options, for instance, it might become much slower (see the special virtual implementation here: https://github.com/facebook/react/blob/master/src/browser/ui/dom/components/ReactDOMInput.js). What's more, idiomatic React tends to use a lot of sub-components, and these are slower than regular virtual DOM nodes (automatic 'this' binding, bookkeeping code, etc.).
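The appendChild-versus-innerHTML tradeoff above can be sketched as two rendering strategies (function names hypothetical; in a browser both would target a real element):

```javascript
// Strategy 1: one DOM call per node, the appendChild approach.
function renderWithAppend(document, container, texts) {
  for (var i = 0; i < texts.length; i++) {
    var span = document.createElement("span");
    span.textContent = texts[i];
    container.appendChild(span); // each call touches the tree
  }
}

// Strategy 2: serialize everything, then let the HTML parser build the
// whole subtree in one shot, as React reportedly does for fresh renders.
function renderWithInnerHTML(container, texts) {
  var html = "";
  for (var i = 0; i < texts.length; i++) {
    html += "<span>" + texts[i] + "</span>";
  }
  container.innerHTML = html; // one assignment, one parse
}
```

The first keeps live references to every node it creates; the second is one bulk operation, which is why it can win on wholesale changes.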

Leo Horie

unread,
Oct 29, 2014, 11:25:10 AM
to mith...@googlegroups.com, ryan....@panesofglass.org, a...@zenexity.com
@agl

Interesting. Thanks for the insight.

I'm surprised they would choose to use innerHTML to implement an optimization. I once had a problem with Angular where, due to one of its obscure caveats, the framework was inadvertently re-initializing jQuery UI draggable for every item in a list every time a user did an action, and that made the UI noticeably slow. Using innerHTML would run into the same class of issues, since you can't magically keep 3rd-party libraries alive across an innerHTML redraw, and the cost of re-initializing them far outweighs the difference between appendChild and innerHTML. Might just be my bad personal experience with the Angular issue, but personally, I wouldn't want a framework trying to be cute with this stuff :/


Barney Carroll

unread,
Oct 29, 2014, 11:57:05 AM
to Alexandre Galays, mith...@googlegroups.com, ryan....@panesofglass.org
On 28 October 2014 22:41, <a...@zenexity.com> wrote:
- For a moderate number of operations, appendChild (used by Mithril) is fine; however, for a big, wholesale change (like reversing a 5000-item array), innerHTML (used by React) is somehow more efficient. This used to be a no-brainer a few years back, but I thought modern browsers were very efficient with both APIs.

That's really interesting. Do you think you could set up a meaningful jsperf test case for this? Obviously only a maniac would try to fork internal rendering methods based on an assumed probability of higher performance, but it'd be interesting to see how and where this is of any benefit.
 
- The benchmark is quite kind to React. It uses divs and spans which are quite lightweight in React's world but if we were to use options for instance it might become much slower (See the special virtual implementation here: https://github.com/facebook/react/blob/master/src/browser/ui/dom/components/ReactDOMInput.js).

TBH DOM elements with dynamic behaviour (form elements, tables) can easily be more efficient if you write the specific behaviour you want in JS/CSS instead of writing interfaces through to the truckload of edge-case-sensitive stuff that gets (inconsistently) packed into them natively. Steven Wittens wrote an excellent article on how increasingly bloated DOM elements are here [1]. Performance aside, <select>s are inconsistent and prone to bad usability (hence select2, chosen), tables are inflexible monstrosities (witness Google Docs Spreadsheet, whose whole purpose is to run a table in html and implements this with … canvas), and web components are far too little too late (as James Long eloquently describes here [2]).

Form elements beyond the most simple input[type=text] in modern web apps are only marginally better than jQuery UI widgets. Given the amount of headaches people have writing middleware to keep form state consistent with their own models, they may as well be writing their own checkboxes from scratch.
 
What's more, idiomatic React tends to use a lot of sub-components, and these are slower than regular virtual DOM nodes (automatic 'this' binding, bookkeeping code, etc.)

True, but idiomatic Mithril has yet to throw up examples of the kind of components the React community is awash with. I agree that Mithril points the way to much leaner, more performant and less API-bloated code, but it'd be nice to have more honest & interesting comparisons to add to the occlusion culling scrolling list.

a...@zenexity.com

unread,
Oct 29, 2014, 4:30:48 PM
to mith...@googlegroups.com, ryan....@panesofglass.org, a...@zenexity.com
angular.js has insane complexity; their select directive is the size of the Mithril codebase, and it's not nice code, it's very entangled, mutable code. So I'm not surprised they have hard-to-reason-about bugs. Version 1.x of Angular will troll many maintaining developers for years to come.

I see what you mean about innerHTML; but they add all these data-reactid attributes for tracking purposes.
The React code is quite hard to follow, but it appears they always use innerHTML for a newly rendered component (so half the cases in that benchmark!); after that, it seems to be close to what Mithril does (setAttribute, insertBefore, etc.). This hybrid approach helps make their codebase complicated.
Perhaps the use of innerHTML is a direct consequence of them wanting to support React.renderComponentToString.

By the way, does mithril support rendering to a DocumentFragment root that is later attached? This would probably speed things up quite a bit for the initial rendering, where these numerous appendChild calls are no longer done directly in the live document. It would also be great for hybrid architectures, where the server renders some base page and mithril complements it (the client fetches a page via ajax, creates a document fragment out of it, then decorates it with mithril before updating the document).
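The DocumentFragment idea can be sketched like this (hypothetical helper; `document` is passed in so the pattern stays explicit):

```javascript
// Build the whole subtree against a detached fragment, then attach once.
// appendChild on a fragment never touches the live document, so no
// reflow happens until the final insertion.
function renderDetached(document, items) {
  var frag = document.createDocumentFragment();
  for (var i = 0; i < items.length; i++) {
    var li = document.createElement("li");
    li.textContent = items[i];
    frag.appendChild(li); // detached: cheap
  }
  return frag; // caller does list.appendChild(frag): one live-DOM operation
}
```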

Leo Horie

unread,
Oct 30, 2014, 11:53:26 AM
to mith...@googlegroups.com, ryan....@panesofglass.org, a...@zenexity.com
The test suite actually renders things to detached elements, so I don't see why you wouldn't be able to do the same with regular code. The only caveat to building a detached DOM tree is that you don't get correct values for things like offsetHeight if you're poking at the DOM from a config.


a...@zenexity.com

unread,
Nov 2, 2014, 6:57:54 AM
to mith...@googlegroups.com, ryan....@panesofglass.org, a...@zenexity.com
After profiling the benchmark and the React/Mithril codebases some more, here's a jsperf that illustrates the biggest difference between the two:

http://jsperf.com/innerhtml-vs-insertbefore-2

This explains why, in the benchmark, mithril does well for updates (except in a few cases where react has an optimized code path), but not creations from a blank state.

a...@zenexity.com

unread,
Nov 2, 2014, 4:20:24 PM
to mith...@googlegroups.com, ryan....@panesofglass.org, a...@zenexity.com

Ah well, the mithril build function is too big to see where the majority of time is spent with the Chrome profiler :(

maciek lesiczka

unread,
Nov 4, 2014, 12:58:06 PM
to mith...@googlegroups.com, ryan....@panesofglass.org, a...@zenexity.com
Hi everyone,
I'm going to optimize (or replace totally, step by step) AngularJS in my project, and lately I made some similar tests on IE; you can find the results here

Leo Horie

unread,
Nov 4, 2014, 2:33:31 PM
to mith...@googlegroups.com
Hmm interesting that React keeps coming out faster than Mithril. I'll investigate when I get a chance.

Leo Horie

unread,
Nov 5, 2014, 10:08:10 AM
to mith...@googlegroups.com
I did some poking around last night, and it turned out that some of the recent bug fixes and pull requests added a bunch of unneeded costs: rendering each element was taking some 16 function calls. I've refactored the code so that a creation now takes only 1 function call, and saw a pretty significant improvement (560ms -> 80ms in my little test). There's still some other low-hanging fruit to fix.

Lawrence Dol

unread,
Nov 5, 2014, 12:40:42 PM
to mith...@googlegroups.com
Nice -- well done.

a...@zenexity.com

unread,
Nov 5, 2014, 2:03:28 PM
to mith...@googlegroups.com
Thank you Leo for looking into this.
Unfortunately, I just ran that benchmark again with mithril/next in both Chrome and Firefox, and mithril is still behind React more often than not: almost always faster for updates, but almost always slower for creations.

I ran each test 10 times instead of 3 for better averages.

http://i171.photobucket.com/albums/u320/boubiyeah/ScreenShot2014-11-05at200140_zps04a243d7.png

Leo Horie

unread,
Nov 5, 2014, 10:24:58 PM
to mith...@googlegroups.com, a...@zenexity.com
@acl FYI, I still want to do a few more tweaks. 

What you mentioned before about React using innerHTML for creation was intriguing. I could look into that idea (though that will be a bigger effort than just fixing the sub-optimal lines of code.)

Barney Carroll

unread,
Nov 6, 2014, 2:37:25 AM
to Leo Horie, mith...@googlegroups.com, Alexandre Galays
The performance improvements don't seem to dent the generic perf tests: see before and after. It'd be nice to take note of every intended performance tweak so we can start validating and comparing received wisdom (where you'd otherwise have to go on a hunch, i.e. should I reduce the number of function calls or instantiate fewer variables?). For instance, I notice your code is full of `call`s, something I try to avoid using casually, as I vaguely remember reading that applying `this` is a significantly expensive procedure, but then it's pointless arguing the toss with me because I don't really remember by how much, etc.
On 6 November 2014 03:24, Leo Horie <leoh...@gmail.com> wrote:
@acl FYI, I still want to do a few more tweaks. 

What you mentioned before about React using innerHTML for creation was intriguing. I could look into that idea (though that will be a bigger effort than just fixing the sub-optimal lines of code.)

--
You received this message because you are subscribed to the Google Groups "mithril" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mithriljs+...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

a...@zenexity.com

unread,
Nov 6, 2014, 7:11:25 AM
to mith...@googlegroups.com, leoh...@gmail.com, a...@zenexity.com
That's a nice jsperf. It shows there is probably no need to explore the innerHTML road (yet?).
Does mithril's recursive build function add children first, before adding their parent to the document? This is what's done in that vanilla JS table example, and it could make a massive difference.

a...@zenexity.com

unread,
Nov 6, 2014, 7:43:53 AM
to mith...@googlegroups.com, leoh...@gmail.com, a...@zenexity.com
I added another test to the JsPerf: http://jsperf.com/rendering-dom-in-mithril/6 to see the difference between rooted and unrooted elements; while it makes a decent difference in Firefox, Chrome does not care at all, so there must be something else.

Nowadays, VMs can call many functions with barely any penalty (there are tons of function calls in React, after all). If mithril's build function were split into a few smaller functions, it would help in finding where most of the time is spent.
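Splitting by node type might look something like this (a toy dispatcher, not Mithril's actual code; it renders to a string purely for illustration):

```javascript
// Each vnode shape gets its own small function, so a profiler can
// attribute time per shape and the JIT sees more monomorphic code.
function build(node) {
  if (Array.isArray(node)) return buildArray(node);
  if (node !== null && typeof node === "object") return buildObj(node);
  return buildTextNode(node);
}
function buildArray(nodes) {
  var out = "";
  for (var i = 0; i < nodes.length; i++) out += build(nodes[i]);
  return out;
}
function buildObj(node) {
  return "<" + node.tag + ">" + build(node.children || []) + "</" + node.tag + ">";
}
function buildTextNode(node) {
  return String(node);
}
```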

Barney Carroll

unread,
Nov 6, 2014, 8:09:58 AM
to Alexandre Galays, mith...@googlegroups.com, Leo Horie
That's intuitive. The huge nested if statements make it difficult to get conclusive data on where the stuff is happening, which would at least be a start in doing internal analysis. Fork it?
On 6 November 2014 12:43, <a...@zenexity.com> wrote:
I added another test to the JsPerf:  http://jsperf.com/rendering-dom-in-mithril/6 to see the difference between rooted and unrooted elements; while it makes a decent difference in Firefox, Chrome does not care at all, so there must be something else.

Nowadays, VMs can call many functions with barely any penalty (There are tons of function calls in react after all). if mithril's build function were to be split in a few smaller functions, it would help in finding where most of the time is spent.

Leo Horie

unread,
Nov 6, 2014, 4:07:04 PM
to mith...@googlegroups.com, a...@zenexity.com, leoh...@gmail.com
If you're interested in helping do perf analysis on the `build` function, what I've been doing is wrapping sections in `new function foo() { ... }` blocks and running a profiler.
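Concretely, the wrapping trick looks like this (the section contents here are made up; the named function expression exists only to label that section in the profiler's call tree):

```javascript
function build(children) {
  var flattened;
  // Named wrapper: this section shows up as "flattenSection" when
  // profiling. Note the extra object allocated by `new` is intrusive
  // and can skew results slightly.
  new function flattenSection() {
    flattened = [];
    for (var i = 0; i < children.length; i++) {
      flattened = flattened.concat(children[i]);
    }
  };
  return flattened;
}
```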

There are 3 major sections, each subdivided into a couple of subsections:




So far my tests say that `flatten` and this loop ( https://github.com/lhorie/mithril.js/blob/next/mithril.js#L182-184 ) are two of the most expensive things in the code right now.

@Barney the `.call` calls are mostly type checking. One of the things that was causing a significant difference in running time was that a recent PR had moved those into isObj/isArr/etc. functions (each of which in turn called another helper function that then called {}.toString.call). While re-inlining them won't show much difference when compared against running native appendChild in a loop, it does seem to make a difference when comparing v0.1.22 and origin/next. I think comparing against native appendChild isn't very useful because the order-of-magnitude difference between a naked appendChild and the diff algorithm overhead shadows the deltas between the incremental perf tweaks.
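The type-checking difference can be sketched as follows (names modeled on the PR's isObj/isArr helpers, not copied from Mithril):

```javascript
// Layered version from the PR: each check costs three calls
// (isArr -> type -> toString.call).
function type(value) { return {}.toString.call(value); }
function isArr(value) { return type(value) === "[object Array]"; }

// Re-inlined version: one call per check, no intermediate frames.
function isArrInlined(value) {
  return {}.toString.call(value) === "[object Array]";
}
```

Both return identical results; the difference is purely in per-check call overhead, which only matters in a hot loop.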

Another thing I still need to double check is whether I'm inadvertently touching the DOM when I shouldn't be.


Barney Carroll

unread,
Nov 6, 2014, 5:02:50 PM
to Leo Horie, mith...@googlegroups.com, a...@zenexity.com
Thanks for the analysis Leo, this is really helpful. It brings to light the often all too subjective eye with which we appreciate 'numbers or GTFO'.


On Thursday, 6 November 2014, Leo Horie <leoh...@gmail.com> wrote:
If you're interested in helping doing perf analysis on the `build` function, what I've been doing is wrapping sections in `new function foo() { ... }` blocks and running a profiler.

I'm wondering how easy it would be to use something like sweet.js to keep verbose scoping functions almost everywhere and use that for profiling tests, while compiling to a build that eschewed these closures. Automatic performance reports alongside acceptance tests, so you can just hack away and then look at standardised performance metrics after the fact. I don't know of anything similar in the Javascript world, and I'm sure it'd be a massive project in its own right... But something that continuously surprises me is how seriously the JS community treats performance and yet how completely and inconsistently ad-hoc these perf tests are (not to mention often misleading in focus or implementation, or dealing with total straw-man use cases like the million rotating circles). If we could get continuous integration hooked into perf tests that ran against any number of scenarios, fallacious or otherwise, we could at least pick our battles and get stats on the bits that interested us. For example, you could be struggling to build in some new functionality, all the while patching in hotfixes for random edge cases, and then later look back and go OMG performance on colossal list rearrangement really dropped off (or improved!) between build 1.2.4 and 1.2.5 – what might have done that?

@Barney the `.call` … when comparing v0.1.22 and origin/next […] I think comparing against native appendChild isn't very useful because the order of magnitude in difference between a naked appendChild vs the diff algorithm overhead shadows the deltas between the incremental perf tweaks.

Yeah, comparing any qualified task to the logicless implementation of the same is a straw man. But comparing commit n to commit n+1 is a massive pain that few people are likely to step up and take on as a manual effort. Who's going to even bother with the book-keeping, or chasing up and comparing these things? I suspect you will always have better things to do (I certainly hope you do ;). But I digress.
 
Another thing I still need to double check is to see if I'm inadvertedly touching the DOM when I shouldn't be.

Mutation Observers. Tests for this would be (relatively) easy to set up. ie: When I change m.config for parentElement and childElementX has key X and childElementX moves position in the list, I would expect subTreeNodeRemovalOnParentElement() to be false. Can't say this is something I'm going to be able to justify the time for in the near future, but it'd be good to discuss what the tests should be, even if it's pseudocode no-one can implement right now.



a...@zenexity.com

unread,
Nov 8, 2014, 9:55:50 AM
to mith...@googlegroups.com, leoh...@gmail.com, a...@zenexity.com
What I found so far:

- In Firefox, those concats are indeed the killers; just replacing that second loop with an inner loop that pushes into the existing array put the perf on par with React or even better (got to go with the imperative approach sometimes). That's easily a 3x boost. The impact on Chrome isn't nearly as big! :(
These numerous concats caused a lot of GC stress, so we now have steadier results.

- Deconstructing 'build' into many functions doesn't hurt performance, perhaps the contrary. One possible interpretation is that Chrome has trouble optimizing (JIT) mithril while it optimizes React just fine. I think we want smaller functions, and fewer dynamic forms (e.g. reassigning unrelated objects, etc.). Using new function() {} is intrusive and skews the results (many short-lived objects are created).
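The concat-to-push change described above can be sketched as two versions of a simplified flatten (not the actual Mithril code):

```javascript
// Before: every concat allocates a brand new array, so flattening n
// children creates n intermediate arrays for the GC to sweep.
function flattenWithConcat(lists) {
  var out = [];
  for (var i = 0; i < lists.length; i++) {
    out = out.concat(lists[i]);
  }
  return out;
}

// After: push into the one existing array; no intermediate allocations.
function flattenWithPush(lists) {
  var out = [];
  for (var i = 0; i < lists.length; i++) {
    var list = lists[i];
    for (var j = 0; j < list.length; j++) {
      out.push(list[j]);
    }
  }
  return out;
}
```

Both produce the same result; the push version just trades allocation pressure for an explicit inner loop.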

a...@zenexity.com

unread,
Nov 16, 2014, 1:01:23 PM
to mith...@googlegroups.com, leoh...@gmail.com, a...@zenexity.com
Played with the benchmark some more.

- It's definitely Chrome not being able to optimize some of Mithril's hot functions. In general, Chrome is not slower than FF, so given the huge difference between the two here, it must be Chrome doing something funky. http://i171.photobucket.com/albums/u320/boubiyeah/ScreenShot2014-11-15at143301_zps7a48051f.png

- I split the build function into buildObj, buildArray, buildTextNode, then split buildObj some more into buildNewObj and updateObj, etc. When smaller functions are introduced, Chrome becomes a bit faster, as it can optimize a bigger percentage of the code base. I also moved the content of the setAttributes try-catch block into its own function, which, again, is supposed to help Chrome a bit.

- Some functions are very hard for Chrome to optimize, like the new buildNewObj. Normally, Chrome cannot optimize a function permanently when the function is too polymorphic; so I rewrote some parts of the codebase so that, for instance, a cache is never just an object with a 'nodes' property, but always a fully-fledged cache object ({tag, children, attrs, nodes}). I also made sure 'children' was never a plain string, always an Array.
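The shape normalization described above might look like this (property names taken from the post; the constructor and helper are hypothetical):

```javascript
// Every cache entry gets the same four properties, assigned in the same
// order, so V8 gives them all one hidden class and the call sites that
// consume them stay monomorphic.
function CacheNode(tag, attrs, children, nodes) {
  this.tag = tag;
  this.attrs = attrs;
  this.children = children; // always an Array, never a plain string
  this.nodes = nodes;
}

function toCacheNode(value) {
  if (value instanceof CacheNode) return value;
  // Normalize the "just an object with a nodes property" case:
  return new CacheNode(null, {}, [], value.nodes || []);
}
```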

Still, Chrome refuses to optimize the hottest function! At this point, I believe Chrome is at least as responsible for the perf issues as the mithril coding style itself (which is fine by me), given how FF was fast no matter what I did. The V8 optimizer seems very fragile.

What do you think?

Leo Horie

unread,
Nov 17, 2014, 1:01:52 PM
to mith...@googlegroups.com, leoh...@gmail.com, a...@zenexity.com
I was running into similar results when I was playing with breaking build into functions over the weekend, but I did get worse results in Firefox when I did that. I have a hunch that the Firefox slowdown might be a red herring (the profiler potentially measuring itself), so I'm going to test that hypothesis later, and if it does turn out to be profiler overhead, then I think we can start landing Chrome-specific optimizations.

Bear in mind that buildObject is expensive because it calls both createElement and appendChild/insertBefore, both of which are expensive.

Another optimization that I have in a separate branch right now is keeping `key` attributes out of the DOM, which would remove 5000 element.setAttribute calls (plus 5000 function calls) per test in the vdom benchmark. I'm not really sure if I should merge that in, because then you would lose the ability to see keys in the generated source for debugging. The documentation doesn't specifically say keys are part of the DOM, so technically it's not a breaking change, but that's debatable.

Barney Carroll

unread,
Nov 17, 2014, 1:26:34 PM
to Leo Horie, mith...@googlegroups.com, a...@zenexity.com
Personally I'd be happier if key wasn't output to the DOM. It looks like a short-sighted get-out-of-jail hack for when you want to share references with external plugins, which would cause all sorts of problems if used extensively.

Perhaps more to the point, it spits loads of Mithril-specific data into the document in a non-DOM-standard way, which is IMO ugly and mildly offends my dormant standardista streak.

I wouldn't qualify this as breaking, since accessing it from the built DOM isn't documented at all.

As a side note, the search part of the app I'm building maps lists to templates with non-trivial contents – nested elements with conditional subtrees and scoped event handlers – and rendering hundreds (>1000) of these locks up Chrome for over a second but performs without noticeable stutter on Firefox. Incidentally, this is a situation where preserving key reference beyond the diff-patch would be a nice addition, since undoing a restrictive search or coming back to the search view from another route effectively re-instates previously known elements, but currently everything is regenerated from scratch (apart from the incidental number of results which persist between changes in search criteria, which is an edge case in practical terms).

I can't help worrying a tiny bit that these aggressive performance optimizations are often comparing browser X version A to browser Y version B on overly simplistic scenarios (I've just drawn 10,000 unstyled empty DIVs from scratch, wahey). Of course it's easier to apply scientific methods and reason about the results in these locked-down scenarios, and I'm not about to hand over my source code and say "make this work better for user journey X", but I think generic and intuitive performance strategies that show some benefit across the board (even if, for example, latest Firefox is bizarrely slower than everything else) should take precedence.

Lawrence Dol

unread,
Nov 17, 2014, 2:04:19 PM
to mith...@googlegroups.com, leoh...@gmail.com, a...@zenexity.com
> Personally I'd be happier if key wasn't output to the DOM.

I agree with Barney, but it would be nice if it could trivially be made to output for debugging purposes (even if that were a commented line in the unminimized source; like Barney I always have that ready to use in testing at the flick of a (config) switch). I still have an open ticket with a strange bug when I tried to use keys whereby the order of the elements emitted was bizarrely affected just by adding the key. However, on balance, the key has nothing to do with the DOM element, and technically violates the HTML5 spec if emitted as an attribute.

> I can't help worrying a tiny bit that these aggressive performance optimizations ... but I think generic and intuitive performance strategies that show some benefit across the board ... should take precedence.

Fully agree. If the reason can be clearly understood, and it's a sub-optimal approach with a defensibly better alternative, then sure; but having esoteric performance hacks in the code to work around failings in this year's browser X is a bad idea in the long term.

Leo Horie

unread,
Nov 17, 2014, 2:36:16 PM
to mith...@googlegroups.com, leoh...@gmail.com, a...@zenexity.com
If it's only for debugging purposes, you can certainly just do m("div", {key: n, test: n}) and "test" would appear as an attribute in the DOM.

Re: Chrome optimizations, I don't think it's accurate to say that we're hacking around browser weaknesses (remember, Chrome is still faster than Firefox even without the extra optimizations). It's more like we're trying to take advantage of new JIT features as the compiler becomes smarter (e.g. reducing the amount of polymorphism in a function signature is pretty much a textbook compiler optimization: the gist is to take a dynamic data structure and turn it into a static struct to reduce low-level overheads). Maybe JIT compilers will eventually become smart enough that we don't need these optimizations, but breaking `build` into smaller units would also be a good idea from a maintenance perspective.

I agree about pursuing higher-impact strategies first (and in fact, that's where the majority of the effort has gone so far: e.g. one of the landed optimizations was on `flatten`, which runs on every array; same for the concat-to-push PR, and removing keys from the DOM would affect every list that uses keys). I'm aware of a whole lot of micro-optimizations that could make Chrome happier (e.g. wrap for-in loops in functions), but I don't expect those to improve things very significantly, so I'm not as concerned about them. There are also classes of optimizations that are extremely time consuming (e.g. changing build to be non-recursive), and because of the effort involved, I'm also not as interested in them.

g00f...@gmail.com

unread,
Nov 26, 2014, 2:02:07 AM
to mith...@googlegroups.com, leoh...@gmail.com, a...@zenexity.com
This may be a good read in order to optimize the code for V8
https://github.com/petkaantonov/bluebird/wiki/Optimization-killers
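One item from that list already came up in this thread: in 2014-era V8, a function containing try/catch could not be optimized, so the standard workaround was to move the try/catch into its own small function. A sketch (the setAttributes logic is made up for illustration):

```javascript
// Hot function stays free of try/catch, so V8 can optimize it.
function setAttributes(element, attrs) {
  for (var name in attrs) {
    trySetAttribute(element, name, attrs[name]);
  }
}

// Only this tiny wrapper carries the try/catch deoptimization.
function trySetAttribute(element, name, value) {
  try {
    element.setAttribute(name, value);
  } catch (e) {
    // e.g. invalid attribute names; swallowed here for illustration
  }
}
```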