Nobody is currently working on optimizing BigInts.
It is definitely planned eventually, but it's still unclear which cases matter. Specifically, "BigInts that fit in 64 bits" sounds easy at a high level, but in practice there's a crucial difference between signed and unsigned 64-bit values (what C++ calls "int64_t" and "uint64_t").

If V8 could get by with supporting just one of those, the implementation would be reasonably straightforward, but it would create unintuitive performance cliffs (e.g. deopts when a value first exceeds the range of the common subset, i.e. uint63) or differences (people would reasonably ask "why is asIntN(64, ...) so much faster than asUintN(64, ...)?", or the other way round).

If V8 had to support both, however, things would get vastly more complicated: we would then probably need 4 internal representations (uint63, uint64, int64, big) forming a diamond-shaped lattice of transitions, plus an (up to?) 16-way dispatch in front of every binary operation, adding a lot of both implementation complexity and runtime cost. So going to these lengths requires some convincing data that it's actually needed.
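To make the dispatch concern concrete, here's a hedged sketch in TypeScript (illustrative only: `Repr`, `classify`, and `dispatchIndex` are invented names, and a real engine would track representations at a much lower level than user-visible BigInt values):

```ts
// Illustrative sketch, not V8 internals: the four hypothetical
// representations described above, forming a diamond-shaped lattice.
// uint63 is the common subset at the bottom; int64 and uint64 sit in
// the middle; "big" (heap digits) is the fallback at the top.
enum Repr { Uint63, Int64, Uint64, Big }

const INT64_MIN = -(2n ** 63n);
const INT64_MAX = 2n ** 63n - 1n;   // also the top of the uint63 range
const UINT64_MAX = 2n ** 64n - 1n;

function classify(x: bigint): Repr {
  if (x >= 0n && x <= INT64_MAX) return Repr.Uint63;         // fits both machine types
  if (x >= INT64_MIN && x < 0n) return Repr.Int64;           // needs signed 64-bit
  if (x > INT64_MAX && x <= UINT64_MAX) return Repr.Uint64;  // needs unsigned 64-bit
  return Repr.Big;                                           // needs heap digits
}

// Every binary operation would sit behind a dispatch over both
// operands' representations: up to 4 x 4 = 16 combinations.
function dispatchIndex(a: bigint, b: bigint): number {
  return classify(a) * 4 + classify(b);
}
```

The point is that the dispatch index alone has 16 possible values, and every binary operation would need a sensible (and fast) answer for each of them.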
Maybe we could/should try to start a discussion with the wider community about which BigInt usage patterns are relevant for high performance.
In the meantime, if you want 64-bit integers, modeling them as a class wrapping two 32-bit integers (like Scala.js does currently) is probably going to give you pretty good performance, because the optimizing compiler will be able to eliminate most or all of the allocations. On 32-bit systems (of which there are still hundreds of millions on the internet), that might well be as fast as 64-bit ranged BigInts ever will be.
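For concreteness, here's a minimal sketch of that pattern in TypeScript (the `Int64` class and its API are illustrative assumptions, not Scala.js' actual encoding):

```ts
// Illustrative sketch of the "class wrapping two 32-bit integers" approach.
class Int64 {
  // hi holds the upper 32 bits (signed), lo the lower 32 bits (unsigned).
  constructor(readonly hi: number, readonly lo: number) {}

  static fromNumber(n: number): Int64 {
    return new Int64(Math.floor(n / 0x100000000) | 0, n >>> 0);
  }

  add(other: Int64): Int64 {
    const lo = (this.lo + other.lo) >>> 0;        // unsigned 32-bit add
    const carry = lo < this.lo ? 1 : 0;           // wraparound means a carry
    const hi = (this.hi + other.hi + carry) | 0;  // signed 32-bit add + carry
    return new Int64(hi, lo);
  }

  toString(): string {
    // Route through BigInt for display only; hot paths stay on 32-bit ints.
    return ((BigInt(this.hi) << 32n) | BigInt(this.lo)).toString();
  }
}
```

Since `hi` and `lo` always stay within 32-bit integer ranges, a JIT can keep them unboxed in registers, and escape analysis can often remove the wrapper allocations entirely, which is where the "pretty good performance" comes from.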