Hi binary-size@
We are working on finalizing the Oilpan library that will be shipped through V8 (CL).
We currently observe a ~172KiB binary size regression, mostly caused by changes to the inlining heuristics on the allocation fast path. This is down from an initial ~300KiB, and we have already invested a fair amount of work in reducing it. (The library itself is a net win: -43KiB vs. +27KiB.)
We may be able to squeeze out a few more KiB, but overall we are happy with the new inlining heuristics, as they should improve allocation speed on the fast path.
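For context, here is a minimal sketch of the kind of trade-off involved; this is not the actual Oilpan/cppgc allocator, and the names (LinearBuffer, Allocate, AllocateSlow) are purely illustrative. The point is that an always-inlined bump-pointer fast path gets copied into every allocation site (costing binary size), while the rare slow path stays out of line:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Hypothetical bump-pointer buffer; not Oilpan's real data structure.
struct LinearBuffer {
  uint8_t* current = nullptr;  // next free byte
  uint8_t* limit = nullptr;    // end of the current buffer
};

// Out-of-line slow path: grab a fresh buffer. In a real GC this is where
// refilling, write-barrier bookkeeping, or a collection would happen.
void* AllocateSlow(LinearBuffer& buffer, size_t size) {
  constexpr size_t kBufferSize = 64 * 1024;
  uint8_t* chunk = static_cast<uint8_t*>(std::malloc(kBufferSize));
  buffer.current = chunk + size;
  buffer.limit = chunk + kBufferSize;
  return chunk;
}

// Fast path: a pointer bump plus a bounds check. Inlining this at every
// allocation site is what trades binary size for allocation speed.
inline void* Allocate(LinearBuffer& buffer, size_t size) {
  if (size <= static_cast<size_t>(buffer.limit - buffer.current)) {
    uint8_t* result = buffer.current;  // common case: space available
    buffer.current = result + size;
    return result;
  }
  return AllocateSlow(buffer, size);  // rare case: refill out of line
}

int main() {
  LinearBuffer buffer;
  void* first = Allocate(buffer, 32);   // takes the slow path once
  void* second = Allocate(buffer, 32);  // stays on the inlined fast path
  return (first != nullptr && second != nullptr) ? 0 : 1;
}
```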
IIUC, this is no longer considered a blocker these days, is that correct?
Cheers, Michael