Hi, quick question:
When compiling to WebAssembly via Binaryen with `-s WASM=1 -s BINARYEN_METHOD='native-wasm'`, will 64-bit integers be handled 'correctly' (i.e. as native wasm i64 operations), or will the resulting WebAssembly code split the 64-bit operations into 32-bit ops?
The reason I'm asking: I'm currently rewriting my 8-bit emulator, and it relies on fast 64-bit operations in the tick callback that's executed about a million times per second (all CPU pins are packed into a 64-bit integer which is handed to the tick callback; the callback inspects and modifies the pins, then hands them back to the CPU emulation).
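For context, here's a minimal sketch of what I mean by that pin-packing approach (the names, pin layout, and macros are made up for illustration, not the actual emulator code):

```c
/* Hypothetical sketch of the "all pins in one 64-bit integer" idea;
   pin positions and names are illustrative only. */
#include <stdint.h>

#define CPU_PIN_RD   (1ULL << 24)   /* read-cycle control pin */
#define CPU_PIN_WR   (1ULL << 25)   /* write-cycle control pin */

#define CPU_GET_ADDR(p)     ((uint16_t)(p))            /* bits 0..15: address bus */
#define CPU_GET_DATA(p)     ((uint8_t)((p) >> 16))     /* bits 16..23: data bus */
#define CPU_SET_DATA(p, d)  (((p) & ~0xFF0000ULL) | ((uint64_t)(d) << 16))

static uint8_t mem[1 << 16];

/* called about a million times per second by the CPU emulation */
static uint64_t my_tick(uint64_t pins) {
    if (pins & CPU_PIN_RD) {
        /* memory read: put the byte at the address pins onto the data pins */
        pins = CPU_SET_DATA(pins, mem[CPU_GET_ADDR(pins)]);
    }
    else if (pins & CPU_PIN_WR) {
        /* memory write: store the data pins at the address pins */
        mem[CPU_GET_ADDR(pins)] = CPU_GET_DATA(pins);
    }
    return pins;
}

int main(void) {
    /* simulate a single read cycle at address 0x1234 */
    uint64_t pins = my_tick(CPU_PIN_RD | 0x1234);
    return (int)CPU_GET_DATA(pins);
}
```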
I'm seeing two suspicious things: (1) the resulting code is about 3..4x slower than native x86-64, which seems unusually slow, and (2) I'm not seeing a performance difference between asm.js and WASM (I would expect the asm.js version to be slower, since it needs to emulate the 64-bit operations with 32-bit ops).
The CPU emulation is very sensitive to things like inlining and memory accesses (e.g. very small changes can cut performance in half in the natively compiled version), so 32-bit vs. 64-bit handling is only one of the possible causes I'm looking into.
Once I do more detailed performance investigations for asm.js/wasm, I will most likely create simpler tests (I'd really like to know why the web versions are 3..4x slower for this type of code; another possible cause is all the bit-twiddling operations on 8- and 16-bit integers).
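A simpler test could look something like this (just a rough sketch of a standalone 64-bit bit-twiddling loop to compare native vs. asm.js vs. wasm; the constants and the mix of operations are arbitrary):

```c
/* Rough sketch of a minimal 64-bit bit-twiddling benchmark;
   purely illustrative, not the actual test code. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    uint64_t pins = 0x1234567890ABCDEFULL;
    clock_t t0 = clock();
    for (uint32_t i = 0; i < 100000000; i++) {
        /* shifts, masks and rotates roughly similar to the pin handling */
        pins = (pins << 1) | (pins >> 63);
        pins ^= (uint64_t)(uint16_t)pins << 16;
        pins += 0x9E3779B97F4A7C15ULL;
    }
    clock_t t1 = clock();
    /* print the result so the loop can't be optimized away */
    printf("%f secs (%llx)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (unsigned long long)pins);
    return 0;
}
```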
Cheers,
-Floh.