Port LGTM
PPC/s390: [wasmfx] Implement resume_throw
Port 4522793ec3278e1af5581adf8f4b8a255ada03ec
Original Commit Message:
The resume_throw instruction creates an exception using the current
stack's operands, resumes the target continuation and immediately throws
the exception from the top of that stack.
Like other stack switching instructions, this is implemented with a
builtin that saves the current register state and restores the target
state. But instead of restoring the saved PC, the builtin throws the
exception.
The builtin also handles the special case where the target continuation
was just created with cont.new and has not been started yet. The
corresponding stack is empty and should only be entered with the stack
entry wrapper. Do not switch in this case; throw the exception from
the current stack instead, which has the same observable behavior.
R=thib...@chromium.org, jun...@ibm.com
BUG=
LOG=N
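The control flow described in the commit message can be sketched as follows. This is a hedged, illustrative model only: `Continuation`, `resume_throw`, and `switch_stack` are hypothetical names, not V8's actual builtin API.

```python
# Illustrative model of the resume_throw control flow described above.
# All names here are hypothetical, not V8's actual API.

class Continuation:
    def __init__(self, started):
        self.started = started  # False right after cont.new


def resume_throw(target, exception, switch_stack):
    """Resume `target` and immediately throw `exception` on it."""
    if not target.started:
        # Special case: target was just created with cont.new and its
        # stack is empty, so throw from the current stack instead of
        # switching -- observably equivalent.
        raise exception
    # Normal case: save the current register state and restore the
    # target's, then throw instead of restoring the saved PC.
    switch_stack(target)
    raise exception
```

In both branches the exception propagates from `resume_throw`; the only difference is which stack it is raised on, which is not observable to the handler.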
__ LoadU64(scratch, MemOperand(target_stack, wasm::kStackFpOffset));

Hi Thibaud,
Is it OK that we do a 64-bit load here and then a 32-bit compare with `CmpSmiLiteral` in pointer-compressed builds, or should we use a CmpU64 regardless?
arm64 seems to do a 64-bit load and compare regardless, and x64 uses cmp_tagged, which does a 32-bit direct memory compare on pointer-compressed builds and a 64-bit compare on non-pointer-compressed builds.
Hi Milad,
This is a raw stack address so this should be a full pointer comparison regardless of pointer compression. It seems that I made a mistake here, and doing a Smi comparison is either misleading or incorrect depending on the platform. I'll upload a fix.
Thanks for pointing that out!
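To see why a raw 64-bit stack address needs a full-width comparison, consider this minimal sketch (the addresses are made-up values, not real stack pointers): two distinct 64-bit addresses can agree in their low 32 bits, so a 32-bit Smi-style compare would wrongly report them equal.

```python
# Two distinct 64-bit raw addresses that share the same low 32 bits.
a = 0x0000_0001_DEAD_BEEF
b = 0x0000_0002_DEAD_BEEF

def cmp32(x, y):
    # Compare only the low 32 bits, as a compressed/Smi compare would.
    return (x & 0xFFFF_FFFF) == (y & 0xFFFF_FFFF)

def cmp64(x, y):
    # Full-width pointer comparison.
    return x == y

assert cmp32(a, b)      # low 32 bits collide: false "equal"
assert not cmp64(a, b)  # full compare distinguishes the addresses
```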
Thank you for checking and creating a patch. I'll also port http://crrev.com/c/7415121 once it lands.