Hi,
We (V8 team) don't have much interest in this feature at this time, mostly because of the relative lack of common use cases. That means you shouldn't expect to get much help from us (personally I'm not planning to do more than write this email), but of course that doesn't stop you from building a V8-based prototype yourself. You can, at least at first, simply do that locally, i.e. without upstreaming it right away -- that might follow later, when your code is in good shape and the proposal is advancing.
To get a V8 checkout, see https://v8.dev/docs/source-code. (In short: get depot_tools and then run "fetch v8". I think everything else isn't necessary at first. In particular, "git cl upload" is a convenient way to share your work with others, but it's not required for local development.)
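In concrete terms, the checkout steps described on that page look roughly like this (a sketch; paths and shell setup will vary on your machine):

```shell
# Get depot_tools, which provides fetch, gclient, git cl, etc.
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH="$PWD/depot_tools:$PATH"

# Fetch V8 and all of its dependencies.
fetch v8
cd v8

# Later, to pull in new upstream changes and re-sync dependencies:
git pull && gclient sync
```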
See the other docs on v8.dev/docs for helpful tips on how to build, how to set up IDEs, debuggers, etc.
Keep in mind that the various build configurations each exist for a reason: x64.release is for performance measurements, x64.optdebug is for running tests with DCHECK coverage, and x64.debug is for interactive debugging. Substitute arm64 for x64 if that matches your hardware.
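As a sketch of what building those configurations looks like with the gm.py helper described in the build docs (the test-filter argument to run-tests.py is illustrative; check its --help for the exact form):

```shell
# Build d8 in the three common configurations.
tools/dev/gm.py x64.release d8
tools/dev/gm.py x64.optdebug d8
tools/dev/gm.py x64.debug d8

# Run the Wasm mjsunit tests against the optdebug build (with DCHECKs).
tools/run-tests.py --outdir=out/x64.optdebug mjsunit/wasm
```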
You can add new Wasm instructions in src/wasm/wasm-opcodes.h.
They'll be decoded by code in src/wasm/function-body-decoder-impl.h.
The baseline compiler is implemented in src/wasm/baseline/liftoff-compiler.cc, with the platform-specific parts in liftoff-assembler*. Tip: focus on one platform first, then do the others once that's working (or even later, only when you approach upstreaming). Liftoff is the easier compiler to work with; it's also optional for a first stab at getting things working: you can simply make it bail out for the new operations. Being the baseline compiler, it's not suitable for performance measurements.
The most time-consuming part will be integration with the optimizing compiler. The entry point is in src/wasm/turboshaft-graph-interface.cc. You may then have to update various places along the compilation pipeline in src/compiler/turboshaft/* and src/compiler/backend/*. I don't have a specific suggestion for how to design the changes there -- perhaps add an option to FloatBinopOp, and check whether that needs special handling in various transformations? Tracing how the various bits of code fit together can be difficult due to the amount of templatization; I'd recommend using plaintext search in addition to semantic indexing; in particular, grep for "ReduceFoo" to find out what happens to "Foo" operations in the various stages. You can run d8 with
--print-wasm-code to see whether the generated code matches your expectations; for debugging the compilation process, it's often useful to pass --trace-turbo and open the resulting .json file with our visualization tool, Turbolizer.
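Concretely, assuming a release build and a test file (here called my-test.js, a placeholder) that instantiates a module using the new instruction, that workflow might look like:

```shell
# Inspect the machine code generated for each Wasm function.
out/x64.release/d8 --print-wasm-code my-test.js

# Dump the compilation pipeline to .json files, then load one
# of them into Turbolizer to inspect the graph at each stage.
out/x64.release/d8 --trace-turbo my-test.js
```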
You can ignore the "Turbofan" (as opposed to "Turboshaft") pipeline, as that's about to be turned off.
For tests, you can add the new instructions to test/mjsunit/wasm/wasm-module-builder.js, and then follow the many examples in test/mjsunit/wasm/* for defining and executing Wasm modules that use them.
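For a feel of what such a test exercises: the wasm-module-builder.js helpers (new WasmModuleBuilder(), addFunction(...).addBody([...]).exportFunc(), instantiate()) ultimately emit raw module bytes. The hand-encoded equivalent below — using plain i32.add where your new instruction would go — runs in any JS engine with Wasm support and shows what the builder produces for you:

```javascript
// A minimal Wasm module, hand-encoded:
//   (func (export "add") (param i32 i32) (result i32)
//     local.get 0  local.get 1  i32.add)
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,  // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,  // version 1
  // Type section: one signature, (i32, i32) -> i32.
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // Function section: one function, using type 0.
  0x03, 0x02, 0x01, 0x00,
  // Export section: export function 0 as "add".
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // Code section: local.get 0, local.get 1, i32.add (0x6a), end.
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3));  // 5
```

A new instruction would mainly mean teaching wasm-module-builder.js a constant for its opcode byte(s), so test bodies can include it the same way they include kExprI32Add today.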
I'm aware that this isn't a very detailed plan, and your learning curve will be steep. Good luck!
--Jakob