--
You received this message because you are subscribed to the Google Groups "elixir-lang-core" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elixir-lang-co...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elixir-lang-core/f0da2df2-432e-423c-a02b-27d8b916a0ecn%40googlegroups.com.
I've been running my tests on Elixir v1.13.1 (built for OTP 24) with OTP 24.1.2.
When decompiling the resulting BEAM bytecode, the anonymous
functions are still visible.
I will run some benchmarks to see what the resulting performance looks like. Maybe the JIT will do something that is not visible in the BEAM bytecode.
I have run some benchmarks (comparing OTP23 with JIT-enabled
OTP24).
Full results here:
https://github.com/Qqwy/elixir-test-benchmrking_then/
In a situation where no tail-call optimization is possible, it compares `Kernel.then/2` against writing the same code manually and against `Kernel.then/2` with `@compile :inline`.
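For concreteness, the three variants might look roughly like this (a hypothetical sketch, not the benchmark repo's exact code; the recursive call is deliberately kept out of tail position):

```elixir
defmodule ThenVariants do
  # Variant 1: `Kernel.then/2`. The `+ 1` happens after the recursive
  # call returns, so this is body recursion: no tail-call optimization.
  def with_then(0), do: 0

  def with_then(n) do
    n |> then(fn x -> x - 1 end) |> with_then() |> Kernel.+(1)
  end

  # Variant 2: the same logic written out manually, without then/2.
  def manual(0), do: 0

  def manual(n) do
    x = n - 1
    manual(x) + 1
  end
end

defmodule ThenVariantsInlined do
  # Variant 3: then/2 again, but with local-function inlining enabled
  # for the whole module via @compile :inline.
  @compile :inline

  def with_then(0), do: 0

  def with_then(n) do
    n |> then(fn x -> x - 1 end) |> with_then() |> Kernel.+(1)
  end
end
```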
A brief summary of the results:
- OTP24 achieves roughly twice as many iterations per second as OTP23. However:
- On OTP24:
  - using `Kernel.then/2` requires (when tail recursion is not possible) 2.5x the memory of the other two variants.
  - using `Kernel.then/2` is roughly 30% slower than the other two variants.
- On OTP23:
  - all three techniques use the same amount of memory.
  - using `Kernel.then/2` is roughly 8% slower than the other two variants.
Strange...
I also took a look at the disassembled code using `:erts_debug.df` as you suggested.
Details here:
https://github.com/Qqwy/elixir-test-benchmrking_then/#looking-at-the-disassembled-code
(Note that under OTP24 the *.dis files contained only 1-5 empty lines, so the output shown is from OTP23. Should I file a bug with the OTP team about this?)
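For reference, the disassembly step amounts to roughly this (a sketch; `:erts_debug.df/1` dumps the loaded BEAM code of a module to a `<module>.dis` file in the current working directory):

```elixir
defmodule DisasmSample do
  # A small function using then/2, so its compiled form can be inspected.
  def run(x), do: x |> then(fn y -> y + 1 end)
end

# Dump the loaded code of the module; this writes "Elixir.DisasmSample.dis"
# (the module's atom name plus ".dis") into the current working directory.
:erts_debug.df(DisasmSample)
```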
It seems that no optimization of immediately-called anonymous functions takes place during loading either.
The benchmarks above seem to support this, although the memory-usage results and the difference in slowdown between OTP23 and OTP24 seem very odd to me.
How should we proceed?
~Marten/Qqwy
Yes, across benchmark runs the memory measurements are the same.
No worries, thanks a lot for your guidance in this matter! ^_^
I will try to come up with some other, more 'real-world'-like examples to double-check whether the benchmark's results apply only to quick snippets or across the board.
Do you happen to know if there is any way to inspect the result of the JIT pass?
As a follow-up, I wrote another benchmark and ran it 'properly' this time, on a custom-built Erlang/OTP 24.2.0 that supports both the `jit` and `emu` `emu_flavor`s.
This updated benchmark compares three different implementations
of a GenServer `handle_cast` callback, which seemed like a more
realistic scenario to me.
See here for the three implementations
https://github.com/Qqwy/elixir-benchmarking_then_genserver/blob/main/lib/implementation.ex#L2-L34
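To give a flavor of what is being compared (a hypothetical sketch; the exact implementations are in the linked file), the `Then` and `Manual` styles of a `handle_cast` callback might look like:

```elixir
defmodule ThenCounter do
  use GenServer

  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast({:add, n}, state) do
    # `Then` style: then/2 produces the return tuple directly,
    # so the call sits in tail position within the callback.
    (state + n) |> then(fn new_state -> {:noreply, new_state} end)
  end

  def handle_cast({:subtract, n}, state) do
    # `Manual` style: the same result written out by hand.
    new_state = state - n
    {:noreply, new_state}
  end

  @impl true
  def handle_call(:current, _from, state), do: {:reply, state, state}
end
```

Since a cast and a subsequent call from the same process travel through the same mailbox in order, reading the state after casting observes the update.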
The results in this example are that `Then`, `ThenInlined` and
`Manual` are similarly efficient, and all take the same amount of
memory.
So I guess that, at least when `Kernel.then/2` is in tail position (which is probably the common case), it is optimized well by the Erlang compiler. :-)
For those who want to dig deeper into this benchmark themselves, see:
https://github.com/Qqwy/elixir-benchmarking_then_genserver
~Marten/Qqwy