

--
You received this message because you are subscribed to the Google Groups "Eiffel Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to eiffel-users...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/eiffel-users/1742ff2d-d237-4833-a4bc-5eeac6c83e9en%40googlegroups.com.
I thought Eiffel was supposed to generate C code and compile that to create the executable - it makes no sense to me that a compiled app would run slower than an interpreted one, if it does the same thing. A purely numerical problem like computing primes does not really need the overhead of an OO language.
Hi Ulrich,
I have tested your hypothesis by removing the OO overhead. I did
this by modifying PRIMES_BENCHMARK_APP
to run the benchmark a second time, but this time reusing the same
instance of PRIME_NUMBER_SIEVE
and turning garbage collection off. As you can see, it made very
little difference to the results.
TO_SPECIAL implementation
Passes: 1328, Time: 5.002, Avg: 0.004, Limit: 1000000, Count: 78498, Valid: True

TO_SPECIAL implementation without GC overhead
Passes: 1369, Time: 5.003, Avg: 0.004, Limit: 1000000, Count: 78498, Valid: True
I am curious to know how Ada would perform in this test. I think
I will go on an Ada group and ask an Ada guy if he would like to
try.
-- Finnian
-- SmartDevelopersUseUnderScoresInTheirIdentifiersBecause_it_is_much_easier_to_read
Finnian Reilly <frei...@gmail.com>:
In theory, such a highly optimized bit vector class could be added to the Eiffel library to remove the mentioned 7% overhead compared to the C++ version. In practice, it’s unclear how many applications use bit vectors.
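For concreteness, here is a minimal sketch in C of what such a packed bit vector boils down to. The names, the 64-bit word size, and the fixed capacity are all illustrative assumptions, not taken from any Eiffel library:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical packed bit vector: one bit per flag, 64 flags per word. */
enum { NBITS = 1000000 };
static uint64_t bits[(NBITS + 63) / 64];

/* i >> 6 selects the word, i & 63 selects the bit within that word. */
static void bit_set(uint64_t *a, size_t i)
{
	a[i >> 6] |= (UINT64_C(1) << (i & 63));
}

static void bit_clear(uint64_t *a, size_t i)
{
	a[i >> 6] &= ~(UINT64_C(1) << (i & 63));
}

static int bit_test(const uint64_t *a, size_t i)
{
	return (int) ((a[i >> 6] >> (i & 63)) & 1);
}
```

The shift/mask pair here is the C counterpart of the `|>>` and `&` operations that appear in the Eiffel code later in this thread.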

Finnian Reilly <frei...@gmail.com>:
Hi Finnian,
I’ll send my version (or, more precisely, fragments of it) to you privately because I did not use EL_* classes.
It turns out that the inlining factor plays an essential role to get the best results.
This is a concept I am not sure I fully understand. I always
assumed it to be a C compiler optimization where function calls
are replaced with the actual code, but I haven't as yet been able
to identify exactly which gcc switch it corresponds to. I looked
in config.sh.
But this prompts the question: could not the C++ program also
benefit from inlining, and would it not be cheating if we do not
also apply it to the C++ example?
My version (it differs from the C++ version by a very efficient
bit-counting algorithm, but this affects only the final step,
not the main loop) with the factor set to 15
outperforms the C++ version on my machine.
That's impressive. You should go to the comments of Dave Garage
video and brag a bit. It would be good publicity for Eiffel.
The results can (and do) differ significantly with lower or higher values of the factor, which is to be expected with such micro-benchmarks. The results can be made more predictable and less dependent on the factor value by inlining two small features (one that clears a bit and one that returns a bit) by hand.

Note that everything is pure Eiffel code without any external features, special classes or other heavy constructs. BTW,
I’m using an up-to-date version of EiffelStudio, not the one from 2016.
But will it in theory still work on the old compiler with some modifications?
Finnian
This is a concept I am not sure I fully understand. I always assumed it to be a C compiler optimization where function calls are replaced with the actual code, but I haven't as yet been able to identify exactly which gcc switch it corresponds to. I looked in config.sh
I think I have answered my own question by RTFM at eiffel.org :-)
Inlining, Inlining Size: enables inlining on Eiffel features that can be inlined, i.e. whose size is less than or equal to the size specified in the combo box. The size value given in parameter corresponds to the number of instructions as seen by the Eiffel compiler (for example a := b.f corresponds to 2 instructions). The inlining is very powerful since it can inline a function in all your Eiffel code, without scope limitation as found in C or C++ compilers. (C code generation mode only)

This seems to suggest that inlining modifies the C code generation and in fact is not some instruction to the C compiler. Wow! You learn something new every day.
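To illustrate the distinction, here is a hedged sketch in C of what feature inlining at the generated-code level amounts to. All names are invented; this is not actual EiffelStudio output, just the shape of the transformation:

```c
/* A tiny Eiffel feature, say `f (b: INTEGER): INTEGER do Result := b + 1 end',
   compiled to a C function in the generated code. */
static int small_feature(int b)
{
	return b + 1;
}

/* Without inlining: the generated C keeps the call at each call site. */
static int without_inlining(int b)
{
	return small_feature(b);
}

/* With inlining: the Eiffel compiler substitutes the feature body at the
   call site while generating C, so the emitted source has no call at all,
   regardless of what the C compiler later decides to do. */
static int with_inlining(int b)
{
	return b + 1;
}
```

This is why the docs say the mechanism has no scope limitation: the substitution happens before the C compiler ever sees the code.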
I would love to see an article explaining inlining in detail,
with graphs etc.
I remember once setting inlining to 5 on a protein folding analyzer project to spectacular effect.
Finnian
-- SmartDevelopersUseUnderScoresInTheirIdentifiersBecause_it_is_much_easier_to_read
It turns out that the inlining factor plays an essential role to get the best results. My version (it differs from the C++ version by very efficient bit counting algorithm, but this affects only the final step, not the main loop) with the factor set to 15 outperforms the C++ version on my machine.
I don't doubt your benchmarks, but I am having trouble
understanding the theory of why it is faster than an unpacked
SPECIAL array. Looking at your `clear' routine, I see that it
first has to read the existing value at `index' in order to set a
new value, whereas with an unpacked array you don't have this
additional read. So in theory it should be slower, not faster. Do
your results depend on some clever C compiler optimizations? How
does it work?
feature -- Modification

	clear (a: like area; i: INTEGER)
			-- Put `False` at `i`-th bit of `a`.
		require
			valid_bit_index: 0 <= i and i < sieve_size
		local
			index: INTEGER
			v: like area.item
		do
			index := i |>> index_shift
			v := a [index] & (one |<< (i & index_mask)).bit_not
			a [index] := v
		ensure
			cleared: not item (a, i)
		end
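One plausible answer, not stated in the thread, is cache footprint: packing makes the sieve's working set 8 to 64 times smaller, so the word being read-modify-written is usually already in L1 cache, which tends to outweigh the extra load per `clear'. A hypothetical C sketch of the two strategies (sizes and names are illustrative assumptions):

```c
#include <stdint.h>
#include <stddef.h>

#define LIMIT 1000000

static uint8_t  unpacked[LIMIT];            /* one byte per flag: ~1 MB  */
static uint64_t packed[(LIMIT + 63) / 64];  /* one bit per flag: ~122 KB */

/* Unpacked: a plain store, no prior read needed. */
static void clear_unpacked(size_t i)
{
	unpacked[i] = 0;
}

/* Packed: read-modify-write, one extra load per clear - but over a
   working set small enough to stay resident in cache. */
static void clear_packed(size_t i)
{
	packed[i >> 6] &= ~(UINT64_C(1) << (i & 63));
}
```

Which variant wins in practice depends on the sieve size relative to the cache hierarchy, so benchmarking both (as this thread does) is the right approach.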
Finnian
Finnian Reilly <fin...@eiffel-loop.com>: