BigInt optimizations


mdem...@gmail.com

Jan 16, 2019, 8:40:31 AM1/16/19
to v8-dev
Hi,

I just wanted to ask whether anyone is currently working on optimizing BigInts (especially ones that fit in 64 bits), or whether it is being considered in any prioritization efforts.

The proposal [0] and the blog post [1] suggest that it is on the roadmap, I'm just wondering if anything is already happening.

I think this would be particularly useful for compilers targeting JS (as changing the integer backend is clearly much easier than changing the target to wasm), such as Scala.js [2] or js_of_ocaml (which is what I have a direct interest in).

Could someone estimate how hard it would be to implement the basic optimizations (like BigInt.asIntN(64, b1 + b2) => native addition)?
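
To be concrete, the kind of pattern I'd expect a compiler to emit looks roughly like this (just a sketch; addInt64 is an illustrative name, not something that exists anywhere today):

    // Every 64-bit operation is wrapped in BigInt.asIntN(64, ...), so the
    // engine knows the result is truncated to a signed 64-bit value and
    // could, in principle, lower the whole expression to a native 64-bit add.
    function addInt64(a, b) {
      return BigInt.asIntN(64, a + b);
    }

    const max = BigInt.asIntN(64, 0x7fffffffffffffffn); // 2**63 - 1
    addInt64(max, 1n); // wraps around to -9223372036854775808n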

Jakob Kummerow

Jan 16, 2019, 5:49:57 PM1/16/19
to v8-...@googlegroups.com
Nobody is currently working on optimizing BigInts.

It is definitely planned to do it eventually, but it's still unclear which cases matter.

Specifically, "BigInts that fit in 64 bits" sounds easy on a high level, but in practice there's a crucial difference between signed and unsigned 64-bit values (what C++ calls "int64_t" and "uint64_t"). If V8 could get by with supporting just one of those, that would make the implementation reasonably straightforward, but it would create unintuitive performance cliffs (e.g. deopts when the first value exceeds the range of the common subset, i.e. uint63) or differences (people would reasonably ask "why is asIntN(64, ...) so much faster than asUintN(64, ...)?", or the other way round).

If V8 had to support both, however, that would make things vastly more complicated: we would then probably need 4 internal representations (uint63, uint64, int64, big) forming a diamond-shaped lattice of transitions, and an (up to?) 16-way dispatch in front of every binary operation, with a bunch of both implementation complexity and runtime cost. So going to these lengths requires some convincing data that it's actually needed.
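
To make the "common subset" concrete, here is a tiny illustration using nothing but the spec'd conversion functions (no V8 internals involved):

    // 2**63 - 1 is the largest value on which the signed and the unsigned
    // 64-bit interpretations agree; one step further and they diverge.
    const inRange  = 2n ** 63n - 1n;   // max uint63 / max int64
    const tooLarge = 2n ** 63n;        // still fits uint64, no longer int64

    BigInt.asIntN(64, inRange) === BigInt.asUintN(64, inRange);   // true
    BigInt.asIntN(64, tooLarge) === BigInt.asUintN(64, tooLarge); // false:
    // asIntN gives -9223372036854775808n, asUintN gives 9223372036854775808n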

Maybe we could/should try to start a discussion with the wider community about which BigInt usage patterns are relevant for high performance.

In the meantime, if you want 64-bit integers, modeling them as a class wrapping two 32-bit integers (like Scala.js does currently) is probably going to give you pretty good performance, because the optimizing compiler will be able to eliminate most or all of the allocations. On 32-bit systems (of which there are still hundreds of millions on the internet), that might well be as fast as 64-bit ranged BigInts ever will be.
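
For illustration, a minimal sketch of the "pair of 32-bit halves" approach in plain JS (the names are made up, and real implementations such as Scala.js' cover many more operations and edge cases):

    // Keeping both halves in 32-bit ranges lets the optimizing compiler hold
    // them in registers as plain 32-bit values, and short-lived instances can
    // be scalar-replaced after inlining.
    class Int64 {
      constructor(lo, hi) {
        this.lo = lo >>> 0; // low 32 bits, unsigned
        this.hi = hi | 0;   // high 32 bits, signed
      }
      add(other) {
        const lo = (this.lo + other.lo) >>> 0;
        const carry = lo < this.lo ? 1 : 0;          // detect 32-bit overflow
        const hi = (this.hi + other.hi + carry) | 0;
        return new Int64(lo, hi);
      }
    }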


Maciej Dębski

Jan 18, 2019, 4:10:32 PM1/18/19
to v8-...@googlegroups.com
Ok, thanks for the exhaustive answer!
