V8 upgrade prototype suggestion

41 views
Skip to first unread message

Jens-Tiago Mendes Serra Bender

unread,
Oct 24, 2025, 11:22:05 AMOct 24
to v8-...@googlegroups.com
Title: [V8] Quantum-Inspired Adaptive Register Allocation (QIARA) in TurboFan – 15–25% faster execution on register-bound workloads

Component: Blink>JavaScript>V8
CC: v8-...@chromium.org
Labels: Type-Feature, Pri-1, V8-Perf, V8-Compiler, Hotlist-Performance, Eng-Review

=== PROBLEM ===
Register allocation in TurboFan uses classical graph coloring (Chaitin-Briggs), which is NP-hard and often produces suboptimal results under high spill pressure. This causes 20–40% slowdowns in numerical loops (e.g., TensorFlow.js, scientific computing) due to excessive memory spills.

=== SOLUTION ===
Introduce **QIARA (Quantum-Inspired Adaptive Register Allocation)**:
1. When live range count > 8 and spill pressure > threshold, model allocation as a QUBO problem.
2. Solve using **on-device QAOA simulation** (via WASM port of Qiskit/D-Wave Ocean, <5ms on modern CPUs).
3. Apply near-optimal register assignment → minimize spills.
4. Fallback to current allocator if quantum sim unavailable.
5. Optional: Federated learning (opt-in) to improve QAOA seeding over time.

=== V8 HOOKS ===
- `src/compiler/register-allocator.cc` → new `QuantumRegisterAllocator` pass
- `v8::internal::compiler::GraphColoringPhase`
- Background `v8::TaskRunner` for QAOA solve
- WASM runtime via `v8::WasmModule`

=== IMPACT ===
- **+15–25% execution speed** on register-heavy workloads (Octane, Kraken, ML inference)
- **–10–20% power usage** on mobile (fewer cache misses)
- **Zero cloud dependency** — fully on-device
- Future-proof: swaps to real quantum hardware via WebQuantum API (2028+)

=== FEASIBILITY ===
- <5K LOC (mostly WASM + TurboFan integration)
- Prototype-ready with `qiskit-wasm` or `dwave-ocean-wasm`
- Experimental flag: `chrome://flags/#enable-qiara`
- Benchmarks: SPEC CPU2017 JS subset, TensorFlow.js models

=== WHY THIS IS NOVEL ===
No JavaScript engine uses quantum-inspired optimization for JIT compilation. This would be a **world-first** for V8 and position Chrome as the leader in next-gen compiler technology.


Requesting Intent to Prototype — happy to provide PoC or collaborate with TurboFan team.

Leszek Swirski

unread,
Oct 24, 2025, 12:30:24 PMOct 24
to v8-...@googlegroups.com
On Fri, Oct 24, 2025 at 5:22 PM Jens-Tiago Mendes Serra Bender <jens-tiago.mend...@landmarkinternationalschool.co.uk> wrote:
Title: [V8] Quantum-Inspired Adaptive Register Allocation (QIARA) in TurboFan – 15–25% faster execution on register-bound workloads

Component: Blink>JavaScript>V8
CC: v8-...@chromium.org
Labels: Type-Feature, Pri-1, V8-Perf, V8-Compiler, Hotlist-Performance, Eng-Review

Did you actually file this as a bug? I don't see it in crbug.com.
 
=== PROBLEM ===
Register allocation in TurboFan uses classical graph coloring (Chaitin-Briggs), which is NP-hard and often produces suboptimal results under high spill pressure. This causes 20–40% slowdowns in numerical loops (e.g., TensorFlow.js, scientific computing) due to excessive memory spills.

TurboFan uses a linear scan allocator.
 
=== SOLUTION ===
Introduce **QIARA (Quantum-Inspired Adaptive Register Allocation)**:
1. When live range count > 8 and spill pressure > threshold, model allocation as a QUBO problem.
2. Solve using **on-device QAOA simulation** (via WASM port of Qiskit/D-Wave Ocean, <5ms on modern CPUs).

5ms makes this slower than roughly 80% of all total TurboFan compile times -- for a JIT compiler it's as important to be fast as it is to generate good code. Also, there is no need to use Wasm here (except sandboxing, which I suspect is not a concern), TurboFan is compiled into the browser and doesn't need to use web APIs to run code.
 
3. Apply near-optimal register assignment → minimize spills.

On classical hardware this is still an NP-hard problem though, whatever the algorithm? Is the claim here that it tends to approach optimality more often than other graph colouring solvers?
 
4. Fallback to current allocator if quantum sim unavailable.
5. Optional: Federated learning (opt-in) to improve QAOA seeding over time.

=== V8 HOOKS ===
- `src/compiler/register-allocator.cc` → new `QuantumRegisterAllocator` pass
- `v8::internal::compiler::GraphColoringPhase`

 
- Background `v8::TaskRunner` for QAOA solve
- WASM runtime via `v8::WasmModule` 

=== IMPACT ===
- **+15–25% execution speed** on register-heavy workloads (Octane, Kraken, ML inference)

I might be mistaken, but I don't think we've seen this improvement when trying out other register allocators.
 
- **–10–20% power usage** on mobile (fewer cache misses)

JIT compilation also uses power.
 
- **Zero cloud dependency** — fully on-device

I should hope so.
 
- Future-proof: swaps to real quantum hardware via WebQuantum API (2028+)

2028? Seems a bit optimistic (and we wouldn't need a Web API anyway).
 
=== FEASIBILITY ===
- <5K LOC (mostly WASM + TurboFan integration)
- Prototype-ready with `qiskit-wasm` or `dwave-ocean-wasm`
- Experimental flag: `chrome://flags/#enable-qiara`
- Benchmarks: SPEC CPU2017 JS subset, TensorFlow.js models

Our benchmarks focus more on realistic web workloads, like Speedometer or Jetstream.
 

=== WHY THIS IS NOVEL ===
No JavaScript engine uses quantum-inspired optimization for JIT compilation. This would be a **world-first** for V8 and position Chrome as the leader in next-gen compiler technology.

You would probably want to try this in an offline/AOT compiler like LLVM, rather than V8, since compilation time is less of a concern there.
 
Requesting Intent to Prototype — happy to provide PoC or collaborate with TurboFan team.

--
--
v8-dev mailing list
v8-...@googlegroups.com
http://groups.google.com/group/v8-dev
---
You received this message because you are subscribed to the Google Groups "v8-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to v8-dev+un...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/v8-dev/CA%2BNJL%2BtRCW3S5NRaMH3KSpareMwwBsEsi-_Z12tWQ8EBgKk9FA%40mail.gmail.com.
Reply all
Reply to author
Forward
0 new messages