Where To Suggest BigInt Optimizations


wowze...@gmail.com

Feb 18, 2020, 11:30:01 PM
to v8-dev
I know that no one is working on BigInts right now, so where should I post my suggestions for optimizations so that someone will see them when they start on BigInt optimizations?

I have been doing a lot of work with BigInts recently, and I have quite a few ideas of how to massively optimize BigInts. I looked through the Chromium source code, and there does not seem to be much in the way of compilation optimizations right now. Some of my ideas are simple and effective, while others may be a lot more complicated and questionable. I even have some controversial suggestions. Without further ado, here are my suggestions.

Simple optimizations are given below. None of the simple optimizations below should require JIT dynamic type-prediction before the optimization can be inserted because if `x` turns out not to be a BigInt, then an error is simply thrown. However, one could extend the optimizations below to utilize dynamic type-prediction so that situations in which the compiler cannot be certain whether the optimization can be applied do not have to be immediately rejected. None of the optimizations below should slow down general compilation because one can conditionally apply passes for the below optimizations only if the compiler sees BigInt literals in the source code.
  1. Do not create a copy with assignment-operators against constants. For example, `x += 15n` should add 15n to x, not add 15n to a copy of x.
  2. Convert multiplication and division into bit shifting where the number involved is a power of two. For example, `x * 8n` and `x / 4n` become `x << 3n` and `x >> 2n` respectively (a power-of-two test is sketched after this list).
  3. Optimize shift-right-then-mask patterns into array-like memory access. For example, `(x >> 256n) & 0xffffn` becomes something like `((int *) x)[ 8 ] & 0xffff` (I apologize for my egregious attempt to mimic C).
  4. Optimize bitwise assignment of a value left-shifted by a constant number of bits into array-like memory access. For example, `x |= 103n << 256n;` becomes something like `((int *) x)[ 8 ] |= 103`. Likewise, `x ^= 27n << 192n;` becomes something like `((int *) x)[ 6 ] ^= 27`.
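As an illustration of suggestion 2, the power-of-two test such a compiler pass might rely on could look like the following (a sketch only; the function and its name are mine, not proposed V8 code):

function powerOfTwoExponent(c) {
  if (c <= 0n || (c & (c - 1n)) !== 0n) return null; // not a positive power of two
  let k = 0n;
  while (c > 1n) { c >>= 1n; k++; }
  return k; // x * 2n**k can be rewritten as x << k
}
console.assert(powerOfTwoExponent(8n) === 3n); // x * 8n -> x << 3n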
Controversial and medium-difficulty optimizations are given below.
  1. Allow us to preallocate the size of a BigInt at the `var` declaration where the variable is scoped, via `0n << allocationBits`. For example, `var x = 0n << 65536n` would pre-allocate 65536 bits of memory, or 8192 bytes. However, `var x = 0n; x <<= 65536n;` would not pre-allocate any memory. For this optimization to be effective, one must also implement the deduplication optimization (simple optimization #1).
  2. Optimize for true unsigned 64-bit integer operations via and-masks instead of trying something over-complicated like `BigInt.asUint64`, where one would need all sorts of complex hooks in case the function gets overridden or whatever else might happen. For example, `var k=2n; k=(2n << 5n) & 0xffffffffffffffffn; k = k * k & 0xffffffffffffffffn; k = (k - (k * k & 0xffffffffffffffffn)) & 0xffffffffffffffffn; k = k / 71n & 0xffffffffffffffffn;` becomes entirely fast unsigned 64-bit operations (the idiom is rewritten with a helper after this list). It looks ugly, but it's not half as bad as it looks: Gzip would minimize the impact on file size. I would imagine that implementing these optimizations might be as simple as copying and tweaking all the almost-asm integer optimizations: `i = i + 1|0` with almost-asm integer optimizations becomes `i = i + 1n & 0xffffffffffffffffn` with unsigned 64-bit optimizations. This would not have to be the permanent way that 64-bit operations are done. Instead, think of it like a piece of candy being freely given to us performance-enthusiasts.
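To make the masking idiom from item 2 concrete, here is the same computation written with a small helper (purely illustrative; the helper is mine and would not be required for the optimization to fire):

const MASK64 = 0xffffffffffffffffn; // (1n << 64n) - 1n
const u64 = (v) => v & MASK64;      // wrap any BigInt to unsigned 64 bits

let k = u64(2n << 5n);   // 64n
k = u64(k * k);          // 4096n
k = u64(k - u64(k * k));
k = u64(k / 71n);

Every intermediate value stays within 64 bits, so a compiler that recognizes the mask could in principle keep k in a machine register throughout.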
Advanced, complex, and difficult optimizations are given below.
  1. Use reference-counting to further reduce unnecessary copying, which sucks up a great deal more of the execution time than it should because memory not in the cache is so slow. Each BigInt is a pointer to the space where it is actually stored. At the stored data, add an extra field which we shall call bnReferenceCount. When a BigInt variable appears on the left side of an assignment, its reference count goes down by one. For each reference on the right side of the assignment, increment bnReferenceCount by one. When a BigInt variable is used, its bnReferenceCount is checked. If bnReferenceCount <= 1, then it is okay for the operator to modify the original contents of the BigInt variable without creating a copy. If bnReferenceCount > 1, then create a copy of the variable (the copy has a bnReferenceCount of 1) and decrement the bnReferenceCount field on the original.

     Let us examine `x = x ** 5n + x`. Prior to the statement, the bnReferenceCount of the BigInt pointed to by `x` is 1. The compiler statically determines that the assignment to x on the left decrements bnReferenceCount by 1 and the next two uses of x increment bnReferenceCount by 2, resulting in a net value of 2 for the bnReferenceCount of `x`. Right before the statement is executed at runtime, we subtract 1 and add 2 to the bnReferenceCount field of the object pointed to by `x`, yielding a bnReferenceCount of 2 for `x`. Then, the first occurrence of `x`, where x is exponentiated, creates a copy of `x` and decrements the bnReferenceCount of the original because there is one less thing pointing to the original. Then, the copy is raised to the power of five without further copying because copies, by default, have a bnReferenceCount of 1. The next occurrence of x does not initiate a copy operation because its bnReferenceCount is 1. During the final addition, both the copy (left) and the original (right) are candidates for storing the result of the addition because they both have a bnReferenceCount of 1, so the longer one, the copy, is chosen as the final holder: the copy and the original are added and the result is stored in the copy. Then, because the original is no longer used, its bnReferenceCount is decremented to 0. Because its bnReferenceCount reaches 0, the original can be submitted to the garbage collector for the next cycle. Finally, the copy is assigned to the variable x, thus incrementing the copy's bnReferenceCount to 2. The statement has finished and the last BigInt to be used was the copy, so the copy's bnReferenceCount is decremented down to 1. After the statement, `x` holds a new BigInt whose bnReferenceCount is 1.
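For what it's worth, the copy-on-write protocol described above can be modeled in a few lines (a toy sketch in plain JavaScript, not proposed engine code; the wrapper class and method names are invented for illustration):

class RcBigInt {
  constructor(value) { this.value = value; this.bnReferenceCount = 1; }
  retain()  { this.bnReferenceCount++; return this; }
  release() { this.bnReferenceCount--; } // reaching 0 means: eligible for collection
  // Apply f in place when unshared; otherwise copy first.
  apply(f) {
    if (this.bnReferenceCount <= 1) { this.value = f(this.value); return this; }
    this.release();
    return new RcBigInt(f(this.value));
  }
}

// `x = x ** 5n + x` under this model:
let x = new RcBigInt(3n);                     // bnReferenceCount === 1
const powed = x.retain().apply(v => v ** 5n); // shared (count 2), so this copies
x = x.apply(v => v + powed.value);            // unshared again, so this mutates in place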

wowze...@gmail.com

Feb 18, 2020, 11:48:37 PM
to v8-dev
Example usage: I am currently working on a function to approximate the exp function for big integers (exp(x) is e**x, where e is a constant, about 2.718). To preallocate the resulting BigInt, I would use medium-difficulty #1: `var result = 0n << (3n ** x)`. Then, I estimate the value into the result variable via various means and methodologies. At the end of the function, I force a copy operation to release the excess memory allocated by `3n ** x` using advanced-difficulty #1: `return result + 0n * result;`. I hope this helps.

Thank you everyone!

Jakob Gruber

Feb 19, 2020, 1:01:30 AM
to v8-...@googlegroups.com, Nico Hartmann, Jakob Kummerow


Jakob Kummerow

Feb 19, 2020, 8:06:04 AM
to v8-...@googlegroups.com
Thanks for these suggestions.

Taking a step back, we would be interested in learning more about your use case. Why do you care about exp() for BigInts? What is the expected size of the BigInts you're working with? Which operations exactly are relevant for performance for your case? At what approximate weight are you seeing (or expecting to see) these computations in your app's/site's CPU profile? Do you have a representative benchmark you can share?

In general, we are aware that most operations on BigInts currently have a lot of overhead because of the not-at-all-optimized implementation method we chose for our baseline version. We do have an ongoing low-priority project to optimize them one by one. That will provide massive speedups and automatically include some of your suggestions; other, more advanced tricks will only make sense afterwards (because right now the performance delta would be dwarfed by the overhead we still have, to the point of not being measurable).

A few more specific comments inline:

On Wed, Feb 19, 2020 at 5:30 AM <wowze...@gmail.com> wrote:
I know that no one is working on BigInts right now, so where should I post my suggestions for optimizations so that someone will see them when they start on BigInt optimizations?

I have been doing a lot of work with BigInts recently, and I have quite a few ideas of how to massively optimize BigInts. I looked through the Chromium source code, and there does not seem to be much in the way of compilation optimizations right now. Some of my ideas are simple and effective, while others may be a lot more complicated and questionable. I even have some controversial suggestions. Without further ado, here are my suggestions.

Simple optimizations are given below. None of the simple optimizations below should require JIT dynamic type-prediction before the optimization can be inserted because if `x` turns out not to be a BigInt, then an error is simply thrown. However, one could extend the optimizations below to utilize dynamic type-prediction so that situations in which the compiler cannot be certain whether the optimization can be applied do not have to be immediately rejected. None of the optimizations below should slow down general compilation because one can conditionally apply passes for the below optimizations only if the compiler sees BigInt literals in the source code.
  1. Do not create a copy with assignment-operators against constants. For example, `x += 15n` should add 15n to x, not add 15n to a copy of x.
In general, we can't do this, because BigInts are immutable after creation (per spec!), and that's observable. Consider:
let x = 42n;
let backup = x;  // You have to copy either here...
x += 15n;  // ...or here, and the latter is usually more efficient.
console.assert(backup === 42n);  // Otherwise this will fail.
So in the general case this would need refcounting, which we don't have.
In the specific case where the compiler can prove that a given BigInt is non-escaping and the addition is truncating to a small enough size, we're already doing what you suggest. We still have to apply the same optimization to most other operations. We could conceivably extend it to somewhat less restrictive scenarios if we determine that it actually matters for those.
  2. Convert multiplication and division into bit shifting where the number involved is a power of two. For example, `x * 8n` and `x / 4n` become `x << 3n` and `x >> 2n` respectively.
This is possible (and I think we already do it for Numbers), but not as easy as it seems at first, because the suggested transformations change semantics for negative inputs:
-7n / 2n === -3n
-7n >> 1n === -4n
  3. Optimize shift-right-then-mask patterns into array-like memory access. For example, `(x >> 256n) & 0xffffn` becomes something like `((int *) x)[ 8 ] & 0xffff` (I apologize for my egregious attempt to mimic C).
Again, this wouldn't do what you expect for negative BigInts. 
For positive BigInts, it's not immediately obvious that this would yield significant benefit, because you'd still have to do a bunch of checks (is it positive? is it long enough? is it a suitable shift amount?) and checks/branches quickly become more expensive than executing straight-line code. (I'm not saying I'm sure it's not worth doing, just that I'm also not sure that it is worth it.)
  4. Optimize bitwise assignment of a value left-shifted by a constant number of bits into array-like memory access. For example, `x |= 103n << 256n;` becomes something like `((int *) x)[ 8 ] |= 103`. Likewise, `x ^= 27n << 192n;` becomes something like `((int *) x)[ 6 ] ^= 27`.
See the first point. Since BigInts are never modified in-place, this is not applicable.

Controversial and medium-difficulty optimizations are given below.
  1. Allow us to preallocate the size of a BigInt at the `var` declaration where the variable is scoped, via `0n << allocationBits`. For example, `var x = 0n << 65536n` would pre-allocate 65536 bits of memory, or 8192 bytes. However, `var x = 0n; x <<= 65536n;` would not pre-allocate any memory. For this optimization to be effective, one must also implement the deduplication optimization (simple optimization #1).
Since BigInts are never modified in-place, the concept of preallocation is not applicable. 

Maybe the previous three points would be more suitably addressed by providing conversion functions between BigInts and Uint8Array/Uint32Array/ArrayBuffer (in both directions), to allow you to directly work with the underlying numeric data. These conversions would copy the bits, but if you need efficient access to enough bytes of a large-enough BigInt, that would probably be worth it. Obviously this is a spec question, not something that V8 could just offer on its own. See https://github.com/tc39/proposal-bigint/issues/163 for some previous discussion.
  2. Optimize for true unsigned 64-bit integer operations via and-masks instead of trying something over-complicated like `BigInt.asUint64`, where one would need all sorts of complex hooks in case the function gets overridden or whatever else might happen. For example, `var k=2n; k=(2n << 5n) & 0xffffffffffffffffn; k = k * k & 0xffffffffffffffffn; k = (k - (k * k & 0xffffffffffffffffn)) & 0xffffffffffffffffn; k = k / 71n & 0xffffffffffffffffn;` becomes entirely fast unsigned 64-bit operations. It looks ugly, but it's not half as bad as it looks: Gzip would minimize the impact on file size. I would imagine that implementing these optimizations might be as simple as copying and tweaking all the almost-asm integer optimizations: `i = i + 1|0` with almost-asm integer optimizations becomes `i = i + 1n & 0xffffffffffffffffn` with unsigned 64-bit optimizations. This would not have to be the permanent way that 64-bit operations are done. Instead, think of it like a piece of candy being freely given to us performance-enthusiasts.
While I agree that asUintN is pretty useless to have in the spec, I don't think that and-masks would provide significantly better performance. They would simply be just as good.
That said, currently our compiler detects asUintN and makes use of that information, whereas we haven't taught it yet to also detect and special-case and-masks.
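For concreteness, the equivalence between asUintN and an and-mask is easy to check (a quick illustrative snippet):

const MASK64 = (1n << 64n) - 1n;
console.assert(BigInt.asUintN(64, -1n) === (-1n & MASK64));               // both 2n**64n - 1n
console.assert(BigInt.asUintN(64, 1n << 65n) === ((1n << 65n) & MASK64)); // both 0n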
 
Advanced, complex, and difficult optimizations are given below.
  1. Use reference-counting to further reduce unnecessary copying, which sucks up a great deal more of the execution time than it should because memory not in the cache is so slow. Each BigInt is a pointer to the space where it is actually stored. At the stored data, add an extra field which we shall call bnReferenceCount. When a BigInt variable appears on the left side of an assignment, its reference count goes down by one. For each reference on the right side of the assignment, increment bnReferenceCount by one. When a BigInt variable is used, its bnReferenceCount is checked. If bnReferenceCount <= 1, then it is okay for the operator to modify the original contents of the BigInt variable without creating a copy. If bnReferenceCount > 1, then create a copy of the variable (the copy has a bnReferenceCount of 1) and decrement the bnReferenceCount field on the original.

     Let us examine `x = x ** 5n + x`. Prior to the statement, the bnReferenceCount of the BigInt pointed to by `x` is 1. The compiler statically determines that the assignment to x on the left decrements bnReferenceCount by 1 and the next two uses of x increment bnReferenceCount by 2, resulting in a net value of 2 for the bnReferenceCount of `x`. Right before the statement is executed at runtime, we subtract 1 and add 2 to the bnReferenceCount field of the object pointed to by `x`, yielding a bnReferenceCount of 2 for `x`. Then, the first occurrence of `x`, where x is exponentiated, creates a copy of `x` and decrements the bnReferenceCount of the original because there is one less thing pointing to the original. Then, the copy is raised to the power of five without further copying because copies, by default, have a bnReferenceCount of 1. The next occurrence of x does not initiate a copy operation because its bnReferenceCount is 1. During the final addition, both the copy (left) and the original (right) are candidates for storing the result of the addition because they both have a bnReferenceCount of 1, so the longer one, the copy, is chosen as the final holder: the copy and the original are added and the result is stored in the copy. Then, because the original is no longer used, its bnReferenceCount is decremented to 0. Because its bnReferenceCount reaches 0, the original can be submitted to the garbage collector for the next cycle. Finally, the copy is assigned to the variable x, thus incrementing the copy's bnReferenceCount to 2. The statement has finished and the last BigInt to be used was the copy, so the copy's bnReferenceCount is decremented down to 1. After the statement, `x` holds a new BigInt whose bnReferenceCount is 1.
Yeah, not going to happen in the general case, because V8 isn't designed around reference-counting, and introducing it just for this use case just wouldn't be worth the immense implementation complexity. That said, for the specific case of non-escaping temporary values inside a function, the compiler can figure this out without any explicit reference counting, thanks to using SSA form.

wowze...@gmail.com

Feb 22, 2020, 10:27:37 PM
to v8-dev
Jakob Kummerow,

Thank you for responding so quickly. I apologize for my delay. Thank you for being so polite to me and thank you so much for pointing out such good points about my suggestions. Reading your answer, in retrospect it does appear that I came across as an unthinking ignoramus who likes to flaunt his opinion on things he knows little about. In this response, I hope that I can rectify the situation and shatter the negative image I unintentionally built.

As for exp on BigInts, I am designing a library I am going to call BigMath. (Uggggghhhh.... library, right? Something that is going to needlessly bloat the download size by several hundred megabytes?!?! Well, I hate libraries like that, so my library will encourage users to use its piecemeal versions to avoid those problems.) It will feature BigInt versions of the ordinary Math functions, and errors will be thrown to handle Infinity and NaN cases. My hope is that eventually, maybe, BigMath might make it into the specification so that everyone can enjoy the benefits of a Math library and coders everywhere do not have to keep reinventing the wheel in their BigInt code. I am not yet finished with my exp function because I am working out how to approximate the convergence of continued fractions, but I have finished all the BigInt machinery that the exp function will work with. See my convergence benchmark on PasteBin. I hope that it gives you a good sense of the way people are using BigInts.

You are absolutely correct about your first point, and I suggest that we copy at the first location: assignment.
let x = 42n;
let backup = x;  // copy here
x += 15n;        // assignment here
console.assert(backup === 42n);  // This will *now* work properly

I strongly believe that copying at variable assignment is much more performant because (smart) coders never needlessly copy variables. JavaScript coders copy variables in order to manipulate them. If the variable is not copied at assignment, then it would have to be copied where it is used in a nearby mathematical expression. Thus, there is no extra penalty incurred by copying the variables at the location of assignment because those variables are going to be copied anyway. In fact, with my suggestion for mutable operations, there will be a large performance gain because, after the variable is copied for assignment to another variable, no more copying is needed for whatever successive math follows in the code. The only problem I can think of is function arguments. The preceding paragraph is contingent upon function argument optimizations, as explained below; otherwise, I entirely agree with you that copy-at-assignment is less performant.
function pow2PlusPow3(inBgN) {
  // One option is to optimize for `| 0n`, `>> 0n`, and `<< 0n` indicating that
  // inBgN must be a BigInt, with an error thrown otherwise.
  var x = inBgN | 0n;
  var y = x;
  x *= y *= x; // example operation that might be done
  return x + y;
}

There's one critical performance problem I can think of with copying at assignment: function arguments. If we dumbly copy at assignment, we would also have to dumbly copy BigInts passed as arguments to functions. Thus, for there to be a real performance gain from BigInts copied at assignment, we would have to get smart about passing BigInts to functions. My limited understanding is that there are already special optimizations for function calls such that functions which modify their arguments must extend the stack to create additional reference variables holding the new values of the arguments. If this is the case, then perhaps you could modify the code handling those optimizations to include checks for BigInts, so that when a BigInt argument is reassigned inside the function, its value is copied into a new BigInt at that point. For example, in the pow2PlusPow3 function above, inBgN is immediately assigned to a variable, copying it right there. In the example below, there would never be any need to copy inBgN.
function factorial(inBgN) {
  var result = 1n;
  for (var i = 2n; i <= inBgN; i++) result *= i;
  return result;
}

I want to clarify that I am not expecting you to work magic here. Rather, I am just posing a suggestion and am wondering if you think it could be practical to implement. I value your knowledge and opinion very highly, as you know a lot more about the Chromium engine than I do and you are an intelligent, kind person.


For transformations of division into bit-shifts, I have a solution to the problem of negative numbers: add 1n before shifting when the number is negative. I threw together the following quick-and-dirty code to confirm that (any negative number plus 1n) >> 1n is the same as (any negative number) / 2n.
for (var i = -0b10101n; i < 0n; i++)
  if (BigInt.asUintN(8, (i + 1n) >> 1n) !== BigInt.asUintN(8, i / 2n))
    console.error(i);
Also, I would like to clarify the extent of this optimization because I failed to do that previously. It would be extremely unlikely for a variable divisor to happen to hold a power of two in most circumstances, so putting this check into the generic division path would likely slow it down more than it improves it. Instead, all I suggest is that the optimization be applied to the brain-dead cases where a variable is divided by a constant instead of by another variable. I can guarantee that there are programmers out there without a care in the world for performance who will use division all over their code and no bit shifts. So, I do believe that these optimizations are crucial for performance because I can only imagine how slow BigInt division must be.
const divisor = 4n;
const dividedN = 50n / divisor;  // does not get optimized into bit shifts (dividing by a variable)
const another = divisor * 32n;   // does indeed get optimized into `divisor << 5n` (left shifts are exact even for negative values)
const remainder = another % 16n; // does indeed get optimized into `another & 15n` when `another` is known to be non-negative
const powered = 2n ** divisor;   // does indeed get optimized into `if (divisor >= 0n) powered = 1n << divisor; else throw Error("…");`
const seventh = divisor ** 7n;   // does not get optimized into bit shifts


As for optimizing bitwise shifts into memory access, it's the same deal: purely a compile-time optimization and not a run-time optimization. You are absolutely correct that trying to do all the necessary checks at runtime would result in a general slowdown. However, doing them at compile-time would, in many cases, result in a general speedup. Observe.
const hugeNumber = 11n ** 27n ** 3n;
const intAt65536thBit = (hugeNumber >> 65536n) & 4294967295n;           // is indeed optimized into a single memory array access at compile-time
const randShift = 40820n;
const intAtRandomShiftInt = (hugeNumber >> randShift) & 4294967295n;    // partially optimized into either one or two memory array accesses at compile-time
const varBitMask = 2n ** 64n - 1n;
const sizedBitAreaAtRandShift = (hugeNumber >> randShift) & varBitMask; // does not get optimized into array access
const sizedBitAreaAt32768Bits = (hugeNumber >> 32768n) & varBitMask;    // does not get optimized into array access
The speed-up from this optimization would be tremendous because, instead of shifting the bits down into a new BigInt and then performing a bitwise-and to yield a second copy, only a single copy operation into a known-size BigInt from a specific offset in memory is needed.
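To sketch what that lowering could look like (a model in plain JavaScript over a hypothetical little-endian array of 32-bit digits; real engine internals differ):

// Models `(x >> s) & 4294967295n` for non-negative x when s is a multiple of 32:
function wordAt(digits /* Uint32Array holding x's magnitude */, s) {
  const i = Number(s / 32n);
  return i < digits.length ? BigInt(digits[i]) : 0n; // one array read, no big-number shift
}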



As for the conversion between BigInts and Uint8Array/Uint32Array/ArrayBuffer, I have my own controversial thoughts on the matter. In the majority of cases where it would be most helpful to edit the underlying bits of a BigInt, only a small handful of bits would be modified. Thus, copying the BigInt into a new Uint8Array/Uint32Array/ArrayBuffer would do little good performance-wise, and people would be forced to use ordinary means of manipulating BigInts with math operators. Something that would be massively helpful to performance is a true view of the real data and not just a copy. What I am referring to is a live typed ArrayBuffer view of the BigInt such that edits to the ArrayBuffer are directly reflected on the BigInt without having to needlessly copy any data. This would allow for lickety-fast operations on BigInts and would be a godsend to the performance-oriented niche of programmers like me.
var bigNum = 12345n ** 12345n;
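A hypothetical shape for such an API (the function name below is invented purely for illustration; nothing like it exists today, and a mutable view would conflict with BigInt immutability as currently specified):

const view = BigInt.asUint32View(bigNum); // imagined live Uint32Array over bigNum's digits
view[0] ^= 1;                             // would flip a bit of bigNum directly, with no copy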


As for the bit-masks, I completely agree that they do not add any performance, but that's not their purpose. The purpose of the bit-masks is to allow for the compiler to make more optimistic assertions with more confidence once the compiler is properly trained to utilize them. Observe the example below.
BigInt.asIntN = function(){};
It's legitimate, spec-compliant JavaScript that the compiler must anticipate and be ready for, because JavaScript sometimes stinks. Bitmasks, on the other hand, are as constant as the North Star. The compiler is able to make assertions about these bit masks yielding 64-bit unsigned integers with 100% confidence, meaning zero overhead and completely unobstructed native 64-bit unsigned integers for lickety-fast code (if the compiler is trained to utilize them). I am arguing for bitmasks not for performance but because I believe they will make more and better optimizations easier to implement.


As for the reference counting, I am merely suggesting changes mostly localized to the BigInt file. Instead of sprinkling reference-counting code all over the Chromium engine, all I am suggesting is sprinkling reference-counting code throughout the single BigInt file. BigInts would have their own internal system of reference counting that only code in the BigInt file would understand and that code elsewhere would consider part of the BigInt payload. That being said, I think you are absolutely right that, as huge an optimization as this could be, it would be a massive undertaking and is, if ever, several years/decades/millennia away from being attempted.


Thank you so much for all of your insights into my suggestions. I hope that my commentary in response to your clever points will be beneficial to the project. I look forward to your response. Thank you so much!

Sincerely,
Jack Giffin

Leszek Swirski

Feb 24, 2020, 5:23:53 AM
to v8-dev
Hi Jack,

First of all, I wanted to make sure that you know that no one thinks of you as "an unthinking ignoramus", nor have you built a negative image. Many of the ideas you're proposing here have also been suggested both internally and externally during the BigInt design and implementation process by very senior, very experienced engineers.

Regarding your description of copy-on-assignment: it's a valid idea, and it would probably work. However, it has downsides in both (implicit) semantics and implementation. Fundamentally, in JavaScript (like in Python or Java, but unlike other languages, like C++), assignment is considered to be equivalent to re-naming, or creating a new reference. It's not expected to copy, and therefore the performance expectation for writers of code is that it won't incur the memory and runtime costs of copying. This isn't just a problem for arithmetic code -- for example, generic code which caches values into a dictionary (by assignment!) won't expect a difference between caching a Number or a BigInt.
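(A minimal illustration of that caching point:)

const cache = new Map();
function remember(key, value) {
  // Generic code like this assumes the stored value is a reference, never a
  // copy -- the same expectation whether value is a Number, a BigInt, or an object.
  cache.set(key, value);
  return value;
}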

Similarly, in the engine we assume that assignments are non-copying, and can _unconditionally_ copy the object's reference for every assignment. Adding in a special case for BigInts means adding a runtime check on _every_ assignment, which means this performance improvement for BigInts would cause every single other assignment to become slower. We already try to optimize away assignments whenever we can, but if we don't know their runtime semantics (because we don't yet know the types) then we won't be able to perform those optimisations (at least not in the initial bytecode compilation). We also assume that "x += y" is equivalent to "x = x + y", and again, changing that would have ripple effects throughout the implementation.

Taking a step back from the details of this proposal: language design is hard, and it's full of compromises. One of the hardest things for me to accept while working on the V8 team is that all the easy problems are already solved, and that sometimes there is no best solution, only a chosen compromise. In this case, we've taken the compromise of maintaining the same semantics for BigInts as for Numbers (i.e. that they are immutable), and of avoiding the overhead that copy-on-write or copy-on-assignment special cases for BigInts would add to the rest of the JavaScript implementation.

Perhaps, if your aim is to create a library which you want to standardise, you should rather look into submitting a proposal to TC39 outlining the library features you would want to have? That way we could implement these fundamental operations natively. You mentioned exp in particular -- is the existing x ** y not sufficient for your use-case?

- Leszek


wowze...@gmail.com

Feb 28, 2020, 4:36:43 PM
to v8-dev
Thank you so much for responding to my suggestions and thank you so much for your detailed explanation. I am very glad to hear that many of the ideas I have proposed here have also been suggested elsewhere. Thank you for hearing my two cents and allowing me to participate in the project. I hope that seeing my usage of BigInts on pastebin might help serve as an example of how people are using BigInts.

In regard to your last comment, x ** y is very much insufficient for my case because e is irrational, with a non-repeating expansion. See the wikipedia page. In theory, I could precompute e to something like 2 ** 20 binary digits, do `eLookUp ** x`, and then shift the result down: `eLookUp ** x >> (x << 20n)`. However, in practice, this would be extremely impractical in terms of performance and memory. For larger values of x, it might take days to compute the result, if it does not run out of memory first. Instead, I am using generalized continued fractions to approximate the result, swapping the numerator and the denominator to divide and cross-multiplying to add. I hope this answers your question about why I am not using x ** y. Thank you.
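(For readers curious about the technique mentioned above, here is a minimal sketch of exp via a generalized continued fraction with BigInt rationals; the function, its parameters, and the fixed-point scaling are illustrative choices, not the author's actual library code:)

// Approximates floor(e**x * 2**scaleBits) using
// e**z = 1 + 2z/(2 - z + z**2/(6 + z**2/(10 + z**2/(14 + ...)))).
function expFixedPoint(x, depth, scaleBits) {
  const z2 = x * x;
  let num = 4n * depth + 2n; // innermost partial denominator: 6, 10, 14, ...
  let den = 1n;
  for (let k = depth - 1n; k >= 1n; k--) {
    // the tail becomes (4k+2) + z**2 / (num/den), cross-multiplied to stay integral
    [num, den] = [(4n * k + 2n) * num + z2 * den, num];
  }
  const q = (2n - x) * num + z2 * den; // e**x ≈ p / q
  const p = q + 2n * x * num;
  return (p << scaleBits) / q;
}
// Example: expFixedPoint(1n, 40n, 64n) is floor(e * 2**64), i.e. about 2.71828... scaled up.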