IsUndefined and IsNull segfault when compiled without V8_ENABLE_CHECKS


Jeroen Ooms

Dec 23, 2019, 5:42:48 PM
to v8-dev
Hi!

I maintain V8 bindings for the R programming language. Recently (I think since 7.9) we started getting segfaults at calls to IsUndefined() and IsNull(). The problem has become more prevalent in 8.1. We've encountered this on both macOS and Arch Linux.

To produce a minimal example, simply take the official hello-world.cc and add something like:

      if(result->IsUndefined()){
        printf("value is undefined!");
      }

A full sample program is attached. The same problem happens with IsNull() and IsNullOrUndefined(). It does not crash when we compile with -DV8_ENABLE_CHECKS, which enables an alternative implementation of IsUndefined().


hello-crash.cc

Ben Noordhuis

Dec 26, 2019, 4:36:50 AM
to v8-...@googlegroups.com
Your test case looks okay to me. With what specific version(s) are you
seeing this? Does it also reproduce with a debug build of V8, and what
does `result` contain when you inspect it in gdb or lldb? What does
the backtrace look like in the debug build?

Jeroen Ooms

Dec 26, 2019, 8:53:47 AM
to v8-...@googlegroups.com
Thanks. I'm working from the master branch now (but I think the bug was introduced around 7.9). It crashes here:

Process 25330 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x708040ef8)
    frame #0: 0x0000000100001145 a.out`main [inlined] v8::internal::Internals::GetInstanceType(obj=47996394545225) at v8-internal.h:233:12
   230   V8_INLINE static int GetInstanceType(const internal::Address obj) {
   231     typedef internal::Address A;
   232     A map = ReadTaggedPointerField(obj, kHeapObjectMapOffset);
-> 233     return ReadRawField<uint16_t>(map, kMapInstanceTypeOffset);
   234   }
   235
   236   V8_INLINE static int GetOddballKind(const internal::Address obj) {
Target 0: (a.out) stopped.

(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x708040ef8)
  * frame #0: 0x0000000100001145 a.out`main [inlined] v8::internal::Internals::GetInstanceType(obj=47996394545225) at v8-internal.h:233:12
    frame #1: 0x00000001000010e4 a.out`main [inlined] v8::Value::QuickIsUndefined(this=0x0000000104857860) const at v8.h:11310
    frame #2: 0x00000001000010a0 a.out`main [inlined] v8::Value::IsUndefined(this=0x0000000104857860) const at v8.h:11301
    frame #3: 0x00000001000010a0 a.out`main(argc=1, argv=0x00007ffeefbff988) at hello-crash.cc:53
    frame #4: 0x00007fff667df7fd libdyld.dylib`start + 1
    frame #5: 0x00007fff667df7fd libdyld.dylib`start + 1


Ben Noordhuis

Dec 29, 2019, 6:00:17 AM
to v8-...@googlegroups.com
obj=47996394545225 in frame #0 is 0x2ba7080c2049, which looks like a
valid heap object (heap objects have bit 0 set; if it's clear, it's a
tagged integer).

It crashes when trying to read the heap object's map (a.k.a. hidden
class), the meta-object that describes its "shape." Is it possible
that you're compiling your test program with a different v8.h than the
one V8 itself was built with? My first hunch is that the offsets into
the object header somehow don't match up.

How are you building V8 and how are you compiling and linking the test program?

Jeroen Ooms

Dec 29, 2019, 6:29:17 AM
to v8-...@googlegroups.com
So you cannot reproduce this crash on the v8 master branch? I'm
surprised, because one of the Arch Linux users has reported exactly the
same crash that I see on macOS, so it really seemed like a bug in V8.
We both use the system clang/libcxx, not the custom ones.

I use this homebrew recipe to build v8 on MacOS:
https://github.com/jeroen/homebrew-dev/blob/master/Formula/v8.rb . If
you have homebrew you can install it like this:

brew tap jeroen/dev
brew install jeroen/dev/v8

And then compile the example program:

clang++ -std=c++11 hello-crash.cc -I/usr/local/opt/v8/libexec
-I/usr/local/opt/v8/libexec/include -L/usr/local/opt/v8/libexec -lv8
-lv8_libplatform

On Arch we use this to build v8:
https://github.com/JanMarvin/v8-R/blob/master/PKGBUILD

Dominik Inführ

Dec 30, 2019, 4:06:04 AM
to v8-dev
I suppose this is because pointer compression was recently enabled by default. So either you disable pointer compression when compiling V8 (v8_enable_pointer_compression = false in your args.gn) or you need to compile your example program with -DV8_COMPRESS_POINTERS. This fixed the issue for me at least. Hope that helps!
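Concretely, the two options look like this (install paths assumed from the earlier brew-based compile command in this thread):

```shell
# Option 1: rebuild V8 with pointer compression off, by adding this
# line to args.gn before building:
#   v8_enable_pointer_compression = false

# Option 2: define the macro when compiling the embedding program, so
# the header's object-layout offsets match the installed library:
clang++ -std=c++11 -DV8_COMPRESS_POINTERS hello-crash.cc \
  -I/usr/local/opt/v8/libexec -I/usr/local/opt/v8/libexec/include \
  -L/usr/local/opt/v8/libexec -lv8 -lv8_libplatform
```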

Jeroen Ooms

Dec 30, 2019, 5:41:06 AM
to v8-...@googlegroups.com
On Mon, Dec 30, 2019 at 10:06 AM Dominik Inführ <dinf...@chromium.org> wrote:
>
> I suppose this is because pointer compression was recently enabled by default. So either you disable pointer compression when compiling V8 (v8_enable_pointer_compression = false in your args.gn) or you need to compile your example program with -DV8_COMPRESS_POINTERS. This fixed the issue for me at least. Hope that helps!

Thanks! This indeed fixes the example program, so your diagnosis seems
correct. However, this does not provide a general solution for
applications that link against a system-distributed version of V8.

So if I understand correctly, the application is supposed to be built
with a V8_COMPRESS_POINTERS setting that matches what V8 itself was
built with, but there is no way of knowing the value of
v8_enable_pointer_compression in a preinstalled V8? Most other C++
libraries ship a config.h file, included by the main header, that
contains such macros.

The R bindings that I maintain should work against any system-provided
libv8 on recent versions of Debian, Fedora, macOS, etc. If I don't
control the V8 version on those systems, is there anything the
bindings' configure script can query to determine the appropriate
V8_COMPRESS_POINTERS setting?
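One possible configure-time approach, assuming nothing beyond a working compiler and the installed library: build the sample program twice, with and without -DV8_COMPRESS_POINTERS, and keep whichever variant runs without crashing. A sketch (paths and flags are assumptions, matching the homebrew layout used earlier in this thread):

```shell
#!/bin/sh
# Sketch of a configure-time probe: the variant of the test program
# that runs to completion tells us how libv8 was built.
V8_PREFIX=/usr/local/opt/v8/libexec
V8_CFLAGS=""
for flag in "" "-DV8_COMPRESS_POINTERS"; do
  if clang++ -std=c++11 $flag hello-crash.cc \
       -I"$V8_PREFIX" -I"$V8_PREFIX/include" \
       -L"$V8_PREFIX" -lv8 -lv8_libplatform -o conftest 2>/dev/null &&
     ./conftest >/dev/null 2>&1; then
    V8_CFLAGS="$flag"
    break
  fi
done
rm -f conftest
echo "using V8_CFLAGS: $V8_CFLAGS"
```

One caveat: reading through the wrong offsets is undefined behavior rather than a guaranteed crash, so a run that happens to survive does not strictly prove the flag is right; a more robust probe would also verify that the program's output is sane.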

Jeroen Ooms

Dec 30, 2019, 5:51:24 AM
to v8-...@googlegroups.com
On Mon, Dec 30, 2019 at 10:06 AM Dominik Inführ <dinf...@chromium.org> wrote:
>
> I suppose this is because pointer compression was recently enabled by default. So either you disable pointer compression when compiling V8 (v8_enable_pointer_compression = false in your args.gn) or you need to compile your example program with -DV8_COMPRESS_POINTERS. This fixed the issue for me at least. Hope that helps!

By the way, it looks like the same problem is reported at:
https://bugs.chromium.org/p/v8/issues/detail?id=10041

Leszek Swirski

Jan 7, 2020, 2:36:47 AM
to v8-dev, Igor Sheludko
Interesting, thanks for the report, sorry for not replying earlier but you know, Christmas and all that :). Could you please create a bug on crbug.com/v8 and assign it to ish...@chromium.org (and cc me)?

+Igor Sheludko, could this be pointer compression root inference? We might have to copy & paste a bunch of code into the api header...
