`const fastResult = runtime::ArrayFastFlat(fastO, len, depthSmi);`
Amemiya Riya: it would be good to implement this fast path in Torque/CSA rather than a runtime function, so that we don't have to pay the runtime call overheads. This is true in general: we usually have the fast paths written in Torque/CSA and only call out to the runtime in slow paths.
Thanks for the feedback! I've updated the implementation to use Torque/CSA instead of a runtime function.
Since you seem busy at the moment, I've added additional reviewers as suggested by the guidelines. I would appreciate it if you could take a look when you have a chance.
sorry for being slow, it's a large commit and needs some dedicated time to review
Thanks for getting back to me. If anything, I'm sorry for the nudge. I appreciate you taking the time to review such a large change.
Sorry again for the slow review. Overall the idea looks very nice; here are some comments and ideas.
`a = NewJSArray(map, this.fixedArray);`
Why not immediately initialise this with `elements`?
```
case (TheHole): {
}
```
I think you have to set `undefined` values for incoming holes too, because the flattened result will `Get` from the input and read `undefined`, not `hole`.
You can test this by adding a test case like:
```
let a = [,,];
let b = [a].flat();
assertEquals(a[0], undefined);
assertEquals(b[0], undefined);
Array.prototype[0] = 42;
assertEquals(a[0], 42);
assertEquals(b[0], undefined);
```
Please double check current behaviour when you do this just to make sure I'm right.
`let stack = growable_fixed_array::NewGrowableFixedArray();`
Rather than an allocated stack, what about using the machine stack by having a recursive call to a helper?
`element = fastOW.LoadElementNoHole(index) otherwise FoundHole;`
If the inputs are DOUBLE_ELEMENTS, this will need to materialize a new Number. It would be nice if `CalculateFlattenedLengthFast` also calculated whether _all_ input arrays are DOUBLE_ELEMENTS, and if yes, you could have a separate fast path that avoids this allocation and preallocates the FlatVector already as a `FixedDoubleArray`.
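In JS terms, the suggested pre-pass could look roughly like this (a hypothetical sketch — the function name mirrors `CalculateFlattenedLengthFast`, but the real Torque code would check elements kinds rather than inspecting individual values):

```javascript
// Hypothetical sketch: compute the flattened length and whether every
// leaf element is a number in one walk. The real implementation would
// consult elements kinds (DOUBLE_ELEMENTS) instead of typeof checks.
function calculateFlattenedLength(arr, depth) {
  let length = 0;
  let allDouble = true;
  for (let i = 0; i < arr.length; i++) {
    if (!(i in arr)) continue;  // holes do not contribute to the result
    const element = arr[i];
    if (depth > 0 && Array.isArray(element)) {
      const sub = calculateFlattenedLength(element, depth - 1);
      length += sub.length;
      allDouble = allDouble && sub.allDouble;
    } else {
      length += 1;
      if (typeof element !== "number") allDouble = false;
    }
  }
  return { length, allDouble };
}
```

For example, `calculateFlattenedLength([1.5, [2.5, 3.5]], 1)` yields `{ length: 3, allDouble: true }`, so the result could be preallocated as a `FixedDoubleArray` of length 3.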
why not immediately initialise this with `elements`?
Thank you for pointing this out. There was no particular reason for the two-step initialization — I simply hadn't considered passing elements directly. I've updated it to initialize with elements immediately after allocation.
I think you have to set `undefined` values for incoming holes too, because the flattened result will `Get` from the input and read `undefined`, not `hole`.
Thank you, I double-checked this. Holes in nested arrays should be correctly skipped since HasProperty returns false for them, so [,,].flat() produces [] (length=0). After setting Array.prototype[0] = 42, b[0] returns 42 via the prototype chain, which appears to match the slow path behaviour. Let me know if I'm missing something.
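For reference, this behaviour can be reproduced in plain JS (a standalone check, using `console.assert` in place of mjsunit's `assertEquals`):

```javascript
// Holes in the nested array are skipped by flat(), because the
// spec's HasProperty check returns false for them.
const a = [,,];        // length 2, both slots are holes
const b = [a].flat();
console.assert(b.length === 0);

// After polluting the prototype, a[0] resolves via the chain;
// b is empty, so b[0] also reads through the prototype.
Array.prototype[0] = 42;
console.assert(a[0] === 42);
console.assert(b[0] === 42);
delete Array.prototype[0];
```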
Rather than an allocated stack, what about using the machine stack by having a recursive call to a helper?
Thank you for the suggestion. I tried converting this to a recursive builtin, but ran into a Torque compiler limitation — inlining complex macros (FastJSArrayWitness + try/label FoundHole + continue) into a builtin body causes the compiler to crash with a stack overflow (SIGSEGV). Additionally, FastJSArray is a transient type and cannot be used as a builtin argument. I've kept the allocated stack approach for now, but happy to revisit if there's a workaround I'm missing.
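For comparison, the explicit-stack traversal keeps (array, index, depth) frames instead of recursing; a hypothetical JS sketch of the same shape (`fastFlat` is an illustrative name, not the actual builtin):

```javascript
// Iterative flatten with an explicit stack of [array, nextIndex,
// remainingDepth] frames, mirroring the GrowableFixedArray-based
// stack in the Torque implementation rather than machine-stack
// recursion.
function fastFlat(input, depth) {
  const result = [];
  const stack = [[input, 0, depth]];
  while (stack.length > 0) {
    const frame = stack[stack.length - 1];
    const [arr, i, d] = frame;
    if (i >= arr.length) {
      stack.pop();               // this frame is exhausted; ascend
      continue;
    }
    frame[1] = i + 1;            // advance before possibly descending
    if (!(i in arr)) continue;   // skip holes, like the spec's Has check
    const element = arr[i];
    if (d > 0 && Array.isArray(element)) {
      stack.push([element, 0, d - 1]);  // descend instead of recursing
    } else {
      result.push(element);
    }
  }
  return result;
}
```

For example, `fastFlat([1, [2, [3]]], 1)` gives `[1, 2, [3]]`, matching `Array.prototype.flat` at depth 1.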
If the inputs are DOUBLE_ELEMENTS, this will need to materialize a new Number. It would be nice if `CalculateFlattenedLengthFast` also calculated whether _all_ input arrays are DOUBLE_ELEMENTS, and if yes, you could have a separate fast path that avoids this allocation and preallocates the FlatVector already as a `FixedDoubleArray`.
Thank you for the suggestion. I've added an allDouble flag to CalculateFlattenedLengthFast that tracks whether all leaf elements are Numbers. When true, TryFastFlat now writes directly into a FixedDoubleArray, avoiding the FlatVector intermediate step and the HeapNumber allocation overhead.
Thanks, I missed the `Has` in the `flat` spec. In that case, I think you're already skipping holes before calling `StoreResult`, which means you should be able to mark the Hole case here unreachable.
let's keep it as-is and revisit separately then, thanks for checking.
Thank you for confirming. I've marked the Hole cases as unreachable.
@ish...@chromium.org @ol...@chromium.org
Hello. I would need one more review to merge this, so would you be able to take a look when you have a chance? I also don't have the permission to submit to CQ, so I would really appreciate it if you could do that after review. Thank you very much!
I have fixed the test failures. Could you please take a look when you have a chance?
Thank you for taking the time to review this multiple times.
Since I do not have try job permissions, could you please trigger a CQ Dry Run again?
Once the tests pass, it would be great to get a review from one more person, as I need a second +1 to merge this.
Additionally, as I don't have permission to click "SUBMIT TO CQ", could you please submit it for me when it's ready?
Thank you for triggering the dry run. The tests have passed!
Thanks for the contribution. This is a nice improvement indeed.
My high-level comment here is that we should be wary of introducing too much polymorphism: on the one hand from generalizing where it's not needed (smi -> double), but also from specializing where we don't have a witness that this is a stable situation, into a target elements kind that is expensive to get out of again (normal -> double).
Wdyt about lower bounding the target elements kind by the max leaf elements kind (modulo holey)?
`if (IsDoubleElementsKind(kind)) {`
I am not sure I like the idea of converting something that was not a double array to a double array. This has the potential of causing additional polymorphism and expensive conversions. E.g., if we have code that is repeatedly calling `flat()` on an array and then adding more arrays to the result.
`const elementArray: FastJSArray =`
In general, if the array is packed or holey smi or double, then there is no need to scan it, because we already know it won't contain any arrays! Same for typed arrays (and of course the same for the source array itself).
`if (allDouble && !Is<Number>(element)) allDouble = false;`
This will convert SMI arrays to DOUBLE arrays. I don't think this is what we want.
`return NewJSArray(doubleMap, doubleElements);`
I find it a bit confusing that this case does not use the NewFlatVector helper. It took me a while to understand why we have this, and the ToDouble conversion case in the helper.
`Cast<FastJSArray>(element) otherwise goto Bailout;`
Here (and probably most similar casts) you could use `Cast<FastJSArrayForRead>` instead.
I am not sure I like the idea of converting something that was not a double array to a double array. This has the potential of causing additional polymorphism and expensive conversions. E.g., if we have code that is repeatedly calling `flat()` on an array and then adding more arrays to the result.
Thanks for the suggestion. I've introduced a targetKind variable derived from the elements kind of each array using GetPackedElementsKind and GetMoreGeneralElementsKind. This tracks the most general input kind and avoids spurious widening.
In general, if the array is packed or holey smi or double, then there is no need to scan it, because we already know it won't contain any arrays! Same for typed arrays (and of course the same for the source array itself).
I updated the code to add the length directly for PACKED_SMI_ELEMENTS and PACKED_DOUBLE_ELEMENTS without scanning. I excluded holey variants because holes are skipped during flattening, so using array.length would overcount the actual elements.
Here (and probably most similar casts) you could use `Cast<FastJSArrayForRead>` instead.
I've updated the internal sub-array casts and traversal variables to use FastJSArrayForRead. The entry parameter remains FastJSArrayForCopy for compatibility with the caller.
This will convert SMI arrays to DOUBLE arrays. I don't think this is what we want.
This is resolved by the targetKind refactoring mentioned in #37. An all-Smi input now correctly produces a PACKED_SMI_ELEMENTS result.
I find it a bit confusing that this case does not use the NewFlatVector helper. It took me a while to understand why we have this, and the ToDouble conversion case in the helper.
The double path writes directly to a FixedDoubleArray to avoid an intermediate FixedArray allocation and copy, which is why it bypasses FlatVector. To make this clearer, I simplified FlatVector by removing type tracking fields and removed the now-unreachable ToDouble conversion branch.
Thanks, this looks a lot simpler now. This is going in the right direction.
However, if I understand the changes correctly, we will almost never generate smi/double elements. Basically `[[1]].flat()` will see PACKED_ELEMENTS on the outer array and already generalize to the max.
My suggestion was actually to only look at the elements kind of leaf arrays (i.e. arrays which do not contain more arrays).
I tried to add some suggested edits on how I would fix this. No guarantees that it actually works 😊
To make sure it does work, let's actually add some mjsunit tests. E.g. something like:
```
assertSmiElementsKind([1].flat());
assertSmiElementsKind([[1]].flat());
assertSmiElementsKind([[1],[1]].flat());
assertDoubleElementsKind([1.1].flat());
assertDoubleElementsKind([[1.1]].flat());
assertDoubleElementsKind([[1.1],[[1.1]]].flat());
assertObjectElementsKind([[1],[[1.1]]].flat());
let generic = [{}];
generic[0] = 1;
assertObjectElementsKind(generic.flat());
assertObjectElementsKind([generic].flat());
etc...
```
`// Order: PACKED_SMI_ELEMENTS < PACKED_DOUBLE_ELEMENTS < PACKED_ELEMENTS.`
Please add a static_assert for that.
`if (a == ElementsKind::PACKED_DOUBLE_ELEMENTS ||`
I would generalize `(SMI v DOUBLE)` to `PACKED`. We typically do so, otherwise e.g., `[huge-smi-array, [1.1]].flat()` doubles the memory consumption.
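The proposed merge rule can be sketched as follows (hypothetical JS; the constant names mirror V8's packed kinds, ordered SMI < DOUBLE < PACKED):

```javascript
// Join over the kind lattice. SMI and DOUBLE each generalize to
// PACKED, and — as suggested — so does their combination, so a huge
// smi array mixed with one double array is not widened to doubles.
const PACKED_SMI_ELEMENTS = 0,
      PACKED_DOUBLE_ELEMENTS = 1,
      PACKED_ELEMENTS = 2;

function moreGeneralElementsKind(a, b) {
  if (a === b) return a;
  // (SMI v DOUBLE) goes straight to PACKED rather than DOUBLE.
  if ((a === PACKED_SMI_ELEMENTS && b === PACKED_DOUBLE_ELEMENTS) ||
      (a === PACKED_DOUBLE_ELEMENTS && b === PACKED_SMI_ELEMENTS)) {
    return PACKED_ELEMENTS;
  }
  return Math.max(a, b);
}
```

With this rule, any two distinct kinds join to PACKED_ELEMENTS; only identical inputs keep a more specific kind.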
`GetPackedElementsKind(source.map.elements_kind);`
```suggestion
GetPackedElementsKind(source.map.elements_kind);
let leafArrayNeedsElementsKindUpdate = false;
if (subKind == ElementsKind::PACKED_ELEMENTS) {
leafArrayNeedsElementsKindUpdate = true;
targetKind = ElementsKind::PACKED_SMI_ELEMENTS;
}
```
`let currentDepth: Smi = depth;`
I would also add the packed elements kind check from line 115 here to have a fast-case for users passing already flat numeric arrays to the builtin.
`targetKind = GetMoreGeneralElementsKind(targetKind, subKind);`
```suggestion
if (subKind == ElementsKind::PACKED_ELEMENTS) {
leafArrayNeedsElementsKindUpdate = true;
subKind = ElementsKind::PACKED_SMI_ELEMENTS;
}
targetKind = GetMoreGeneralElementsKind(targetKind, subKind);
```
`}`
```suggestion
} else if (!IsNumber(element)) {
targetKind = GetMoreGeneralElementsKind(targetKind,
ElementsKind::PACKED_ELEMENTS);
} else if (TaggedIsSmi(element)) {
targetKind = GetMoreGeneralElementsKind(targetKind,
ElementsKind::PACKED_DOUBLE_ELEMENTS);
}
```
`if (stack.length == 0) break;`
```suggestion
if (leafArrayNeedsElementsKindUpdate) {
const rawKind: ElementsKind = elementArray.map.elements_kind;
const subKind: ElementsKind = GetPackedElementsKind(rawKind);
targetKind = GetMoreGeneralElementsKind(targetKind, subKind);
}
leafArrayNeedsElementsKindUpdate = false;
if (stack.length == 0) break;
```
`} else if (TaggedIsSmi(element)) {`
`!TaggedIsSmi` of course
Thank you for the detailed review and the suggested edits. I've implemented the leaf array inspection approach, but used boolean flags (seenSmi, seenDouble, seenObject) instead of the merge-based approach in the suggested edits.
The reason is that resetting PACKED_ELEMENTS sub-array kind to SMI as a placeholder causes incorrect merges at deeper nesting levels — for example, [[[1.1],[2.2]]].flat(2) would produce PACKED_ELEMENTS instead of PACKED_DOUBLE_ELEMENTS because the placeholder SMI gets merged with the first DOUBLE sub-array.
The boolean flags approach avoids this by tracking seen types directly.
All the test cases you suggested are added, except for assertDoubleElementsKind([[1.1],[[1.1]]].flat()) — the default flat depth is 1, so the result is [1.1, [1.1]] which contains an array reference, meaning PACKED_DOUBLE_ELEMENTS is not achievable. Was this intended for a different input or a depth of 2?
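The flags-based kind selection described above might look like this (hypothetical JS sketch; `Number.isInteger` stands in for the Smi check, which in V8 is a tag test, not a value test):

```javascript
// Track which leaf value categories were seen, then pick the result
// kind once at the end. Mixed smi+double falls back to
// PACKED_ELEMENTS, per the earlier memory-consumption comment.
function resultKind(leaves) {
  let seenSmi = false, seenDouble = false, seenObject = false;
  for (const element of leaves) {
    if (typeof element !== "number") seenObject = true;
    else if (Number.isInteger(element)) seenSmi = true;  // stand-in for TaggedIsSmi
    else seenDouble = true;
  }
  if (seenObject || (seenSmi && seenDouble)) return "PACKED_ELEMENTS";
  if (seenDouble) return "PACKED_DOUBLE_ELEMENTS";
  return "PACKED_SMI_ELEMENTS";
}
```

For example, `resultKind([1, 2])` is `"PACKED_SMI_ELEMENTS"`, `resultKind([1.1, 2.2])` is `"PACKED_DOUBLE_ELEMENTS"`, and `resultKind([1, 1.1])` is `"PACKED_ELEMENTS"`.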
`// Order: PACKED_SMI_ELEMENTS < PACKED_DOUBLE_ELEMENTS < PACKED_ELEMENTS.`
Please add a static_assert for that.
The macro GetMoreGeneralElementsKind was removed in this patchset since the boolean flags approach replaces the merge logic entirely, so there is no longer a place where the static_assert would apply.
I would generalize `(SMI v DOUBLE)` to `PACKED`. We typically do so, otherwise e.g., `[huge-smi-array, [1.1]].flat()` doubles the memory consumption.
Thank you for the suggestion. The SMI+DOUBLE → PACKED_ELEMENTS behavior is preserved in the final kind determination using boolean flags. GetMoreGeneralElementsKind itself was removed since the boolean flags approach does not need a merge function.
`GetPackedElementsKind(source.map.elements_kind);`
Thank you for the suggested edit. I took a slightly different approach — packed numeric sources return early with their kind, and PACKED_ELEMENTS sources fall through to the full scan using boolean flags. The merge-based initialization with a SMI placeholder caused incorrect results at deeper nesting levels (see top-level comment for details).
I would also add the packed elements kind check from line 115 here to have a fast-case for users passing already flat numeric arrays to the builtin.
Thank you for the suggestion. Implemented as an early return for PACKED_SMI_ELEMENTS and PACKED_DOUBLE_ELEMENTS sources.
`targetKind = GetMoreGeneralElementsKind(targetKind, subKind);`
Thank you for the suggested edit. I took a slightly different approach here — packed numeric sub-arrays set the corresponding boolean flag and add their length directly, while PACKED_ELEMENTS sub-arrays are descended into so their leaf elements are inspected individually. Resetting PACKED_ELEMENTS to SMI and merging it as a placeholder caused incorrect results at depth > 1, e.g. [[[1.1],[2.2]]].flat(2) produced PACKED_ELEMENTS instead of PACKED_DOUBLE_ELEMENTS.
`!TaggedIsSmi` of course
Done
`if (leafArrayNeedsElementsKindUpdate) {`
This is no longer needed with the current approach. Since PACKED_ELEMENTS sub-arrays are descended into rather than merged, there is no placeholder kind to correct during ascending.
Was this intended for a different input or a depth of 2?
yes, of course.
lgtm, thanks for the additional improvements.