18楼梦想改造家
Sep 2, 2025, 3:12:47 AM
to v8-dev
Hi everyone,
I want to study the implementation details of V8’s CodeStubAssembler (CSA), so I built the simplest code sample I could:
``` js
function add(a, b) {
  return a + b;
}
%PrepareFunctionForOptimization(add);
add(0x4141, 0x4000);
add(0x4242, 0x4001);
```
By reading the source code, I traced this down to Generate_AddWithFeedback, which gets embedded into AddHandler. Since the values above are added as Smis, it eventually reaches TrySmiAdd:
``` c++
Comment("perform smi operation");
// If rhs is known to be an Smi we want to fast path Smi operation. This
// is for AddSmi operation. For the normal Add operation, we want to fast
// path both Smi and Number operations, so this path should not be marked
// as Deferred.
TNode<Smi> rhs_smi = CAST(rhs);
Label if_overflow(this,
                  rhs_known_smi ? Label::kDeferred : Label::kNonDeferred);
TNode<Smi> smi_result = TrySmiAdd(lhs_smi, rhs_smi, &if_overflow); // [+] @a
```
But the implementation of TrySmiAdd really confuses me:
``` c++
if (SmiValuesAre32Bits()) { // [+] hit here since v8_enable_pointer_compression = false
  return BitcastWordToTaggedSigned(
      TryIntPtrAdd(BitcastTaggedToWordForTagAndSmiBits(lhs),
                   BitcastTaggedToWordForTagAndSmiBits(rhs), if_overflow));
}
```
For easier debugging and a clearer memory layout, I disabled pointer compression (v8_enable_pointer_compression = false).
This means TrySmiAdd ends up calling TryIntPtrAdd, and my confusion is mostly about that function.
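To convince myself why operating on the raw bitcast words is enough here, I wrote a small plain-C++ model of the no-pointer-compression Smi layout. This is only my own sketch (not V8 code), the helper names are mine, and I may be wrong about the details:
``` c++
#include <cstdint>
#include <cstdio>

// My model: with 64-bit words and no pointer compression, a Smi keeps its
// 32-bit value in the upper half of the word, so tagging is just `value << 32`
// and the low 32 bits stay zero.
int64_t TagSmi(int32_t value) { return static_cast<int64_t>(value) << 32; }
int32_t UntagSmi(int64_t tagged) { return static_cast<int32_t>(tagged >> 32); }

// Adding the two tagged words directly yields `(a + b) << 32`, and signed
// 64-bit overflow of that addition happens exactly when `a + b` leaves the
// 32-bit Smi value range. If that reasoning is right, a single
// add-with-overflow on the bitcast words is all TrySmiAdd needs.
bool ModelTrySmiAdd(int64_t lhs, int64_t rhs, int64_t* result) {
  return !__builtin_add_overflow(lhs, rhs, result);  // GCC/Clang builtin
}

int main() {
  int64_t result;
  if (ModelTrySmiAdd(TagSmi(0x4141), TagSmi(0x4000), &result)) {
    std::printf("0x%x\n", UntagSmi(result));  // prints 0x8141
  }
  return 0;
}
```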
## TryIntPtrAdd
``` c++
TNode<IntPtrT> CodeStubAssembler::TryIntPtrAdd(TNode<IntPtrT> a,
                                               TNode<IntPtrT> b,
                                               Label* if_overflow) {
  TNode<PairT<IntPtrT, BoolT>> pair = IntPtrAddWithOverflow(a, b); // [+] @b
  [...]
}
```
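I assume the elided part just splits the pair and branches on the overflow bit, roughly like this (my guess based on other CSA code, not copied from the source):
``` c++
// My assumption of the elided body: Projection<0> is the sum,
// Projection<1> is the overflow flag.
TNode<BoolT> overflow = Projection<1>(pair);
GotoIf(overflow, if_overflow);  // bail out to the caller's label on overflow
return Projection<0>(pair);
```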
My main question is about `@b`: how exactly is IntPtrAddWithOverflow implemented?
By reading the source, I found two pieces of code related to its definition:
``` c++
  V(IntPtrAddWithOverflow, PAIR_TYPE(IntPtrT, BoolT), IntPtrT, IntPtrT)     \

// Basic arithmetic operations.
#define DECLARE_CODE_ASSEMBLER_BINARY_OP(name, ResType, Arg1Type, Arg2Type) \
  TNode<ResType> name(TNode<Arg1Type> a, TNode<Arg2Type> b);
CODE_ASSEMBLER_BINARY_OP_LIST(DECLARE_CODE_ASSEMBLER_BINARY_OP)
#undef DECLARE_CODE_ASSEMBLER_BINARY_OP

#define DEFINE_CODE_ASSEMBLER_BINARY_OP(name, ResType, Arg1Type, Arg2Type)   \
  TNode<ResType> CodeAssembler::name(TNode<Arg1Type> a, TNode<Arg2Type> b) { \
    return UncheckedCast<ResType>(raw_assembler()->name(a, b));              \
  }
CODE_ASSEMBLER_BINARY_OP_LIST(DEFINE_CODE_ASSEMBLER_BINARY_OP)
#undef DEFINE_CODE_ASSEMBLER_BINARY_OP
```
After macro expansion, this should give something like:
``` c++
TNode<PairT<IntPtrT, BoolT>> CodeAssembler::IntPtrAddWithOverflow(
    TNode<IntPtrT> a, TNode<IntPtrT> b) {
  return UncheckedCast<PairT<IntPtrT, BoolT>>(
      raw_assembler()->IntPtrAddWithOverflow(a, b));
}
```
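To check that I understand the macro machinery itself, I also wrote a tiny standalone analog of this X-macro pattern (all names here, like BINARY_OP_LIST and PAIR, are mine, not V8's, and the toy body stands in for raw_assembler()->name(a, b)):
``` c++
#include <cstdio>
#include <utility>

// Wrapping the pair type in a macro keeps its comma inside parentheses, the
// same trick PAIR_TYPE plays in the real list.
#define PAIR(T1, T2) std::pair<T1, T2>

// The "X-macro" list: one entry per operation.
#define BINARY_OP_LIST(V) V(AddWithOverflow, PAIR(long, bool), long, long)

// First expansion: declarations.
#define DECLARE_BINARY_OP(name, ResType, Arg1Type, Arg2Type) \
  ResType name(Arg1Type a, Arg2Type b);
BINARY_OP_LIST(DECLARE_BINARY_OP)
#undef DECLARE_BINARY_OP

// Second expansion: definitions (with a toy body).
#define DEFINE_BINARY_OP(name, ResType, Arg1Type, Arg2Type) \
  ResType name(Arg1Type a, Arg2Type b) {                    \
    Arg1Type sum;                                           \
    bool overflow = __builtin_add_overflow(a, b, &sum);     \
    return {sum, overflow};                                 \
  }
BINARY_OP_LIST(DEFINE_BINARY_OP)
#undef DEFINE_BINARY_OP

int main() {
  auto [sum, overflow] = AddWithOverflow(1, 2);
  std::printf("%ld %d\n", sum, static_cast<int>(overflow));  // prints: 3 0
  return 0;
}
```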
At this point I have two questions I cannot figure out:
1. What exactly does raw_assembler() return? I suspect it’s platform-related. Where is its code defined?
2. Where is the implementation of raw_assembler()->IntPtrAddWithOverflow(a, b) located? I tried searching the V8 codebase but couldn’t find it.
If anyone could clarify this, I’d really appreciate it. Thanks!