Huge minidumps on Apple Silicon

Jonathan Jones

Mar 24, 2026, 9:34:55 AM
to Crashpad-dev
Hi folks,

After making a code change in our product (not in Crashpad or our crash handling), Crashpad is generating minidumps larger than should be possible from the crashing process ... minidumps well in excess of 1 GB (with indirect memory capture disabled).

Early analysis shows that the captured thread stacks have overlapping address ranges, resulting in massive duplication in the minidump.

Has anyone else reported this?  We used to see this only on macOS Intel, but we dropped support for our product on that platform.  This is the first time we are seeing this on Apple Silicon.

Best wishes,
Jonathan Jones

Jonathan Jones

Mar 26, 2026, 8:49:39 AM
to Crashpad-dev, Jonathan Jones
We have more information.  We changed our app from a vanilla executable to an NSApplication bundle (but we did not directly change how threads are created or spawned).  This seems to have resulted in the following underlying change.

Before the change, threads are created via pthread_create(). Each pthread gets its own VM allocation with guard pages. When Crashpad calls mach_vm_region() to find the stack bounds for a thread's SP, each thread maps to a different VM region with a unique end address. Result: ~20 KB captured per thread.

After the change, the app is an NSApplication-based bundle that links AppKit.framework, which brings in GCD/libdispatch. GCD manages thread pools by allocating stacks from shared VM regions: multiple GCD worker threads, dispatch queues, NSEventThread, etc. all have their stack pointers within the same large VM region. When Crashpad calls mach_vm_region() for each thread's SP, it gets the same region end address for all threads in that group.

The result: each captured thread stack region runs from the start of the thread's stack all the way to the end of the large shared VM region, producing massive overlap in the memory captured for different threads.  I patched Crashpad to truncate the overlap, but the memory captured per thread is still fundamentally too large ... capture is not stopping at the end of the thread's stack space.

Jonathan Jones

Apr 14, 2026, 5:25:58 PM
to Crashpad-dev, Jonathan Jones
As an update, we've observed some weird behaviors in virtual memory when these large minidumps are generated.

I added instrumentation to Crashpad to run "vmmap" before capturing the stack regions.  Here is an example of a "normal" looking vmmap (output heavily edited):

$ /usr/bin/vmmap -interleaved -submaps -noCoalesce <pid>
REGION TYPE       START - END         [ VSIZE  RSDNT  DIRTY   SWAP] PRT/MAX SHRMOD PURGE    REGION DETAIL
STACK GUARD    167d74000-16b578000    [ 56.0M     0K     0K     0K] ---/rwx SM=NUL          stack guard for thread 0
Stack          16b578000-16bd74000    [ 8176K   160K   160K     0K] rw-/rwx SM=PRV          thread 0
STACK GUARD    16bd74000-16bd78000    [   16K     0K     0K     0K] ---/rwx SM=NUL          stack guard for thread 19
Stack          16bd78000-16be00000    [  544K    48K    16K     0K] rw-/rwx SM=PRV          thread 19
STACK GUARD    16be00000-16be04000    [   16K     0K     0K     0K] ---/rwx SM=NUL          stack guard for thread 18
Stack          16be04000-16be8c000    [  544K    48K    48K     0K] rw-/rwx SM=PRV          thread 18


Observations:
  • The "REGION TYPE" is "STACK GUARD" for the guard regions.
  • The PRT (permissions) column shows no permissions for the guard regions.
  • The SHRMOD (share mode) is NUL (empty) for the guard regions.
  • Except for the first guard, VSIZE=16K (exactly one page of virtual memory).
  • The main thread VSIZE=8176K.
  • Secondary threads VSIZE=544K.
Here is a vmmap example when a large minidump is generated:

REGION TYPE            START - END         [ VSIZE  RSDNT  DIRTY   SWAP] PRT/MAX SHRMOD PURGE    REGION DETAIL
Stack (reserved)    16bde8000-16f5ec000    [ 56.0M     0K     0K     0K] r--/rwx SM=NUL          reserved VM address space (unallocated)
Stack               16f5ec000-16fde8000    [ 8176K  8176K  8176K     0K] rw-/rwx SM=PRV          thread 0
Stack               16fe74000-16fe78000    [   16K    16K    16K     0K] r--/rwx SM=PRV
Stack               16fe78000-16ff00000    [  544K   544K   544K     0K] rw-/rwx SM=PRV          thread 4
Stack               16ff00000-16ff04000    [   16K    16K    16K     0K] r--/rwx SM=PRV
Stack               16ff04000-16ff8c000    [  544K   544K   544K     0K] rw-/rwx SM=COW          thread 4
Stack               16ff8c000-16ff90000    [   16K    16K    16K     0K] r--/rwx SM=PRV
Stack               16ff90000-170018000    [  544K   544K   544K     0K] rw-/rwx SM=COW          thread 4
Stack               170018000-17001c000    [   16K    16K    16K     0K] r--/rwx SM=PRV
Stack               17001c000-1700a4000    [  544K   544K   544K     0K] rw-/rwx SM=PRV          thread 4
Stack               170130000-170134000    [   16K    16K    16K     0K] r--/rwx SM=PRV

Observations:
  • The "REGION TYPE" for the guard regions appear to be either "Stack" or "Stack (reserved)", not "STACK GUARD".
  • The PRT (permissions) column shows the supposed guard regions as readable.
  • The SHRMOD (share mode) of PRV (private) for the supposed guard regions.
  • Except for the first guard, VSIZE=16K (exactly one page of virtual memory).
  • The main thread VSIZE=8176K.
  • Regions of 544K between the 16K regions.
Even vmmap itself seems to be confused, thinking a bunch of regions belong to thread 4, when they are likely all different threads.

I was able to work around this in Crashpad by treating read-only memory regions as not part of the stack (a genuine stack region should be read-write, not read-only).  This seems to restore minidumps to their normal size of a few megabytes, and after examining a couple of crashes generated this way, the captured stacks don't appear to be corrupted.