Syzkaller: repeated “fork not possible” failures after long-running iterations


Saurabh Sahu

Apr 27, 2026, 3:11:11 AM
to syzkaller
Hi syzkaller folks,

I’m looking for advice on a memory/fork issue during long-running fuzzing.

Setup:
- syzkaller for Linux kernel fuzzing
- VM assigned memory: 2.7 GB
- `free -h` often shows ~1 GB free (rest used by kernel/cache)
- `procs=1`
- an automation script starts the next fuzzing iteration as soon as the previous one completes (simplified sketch below)
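
To make the setup concrete, here is a simplified sketch of the kind of loop I mean; the real script differs in details, and the binary path, config name, and time budget are placeholders:

  # Simplified sketch of my automation loop (paths/timeouts are placeholders).
  import subprocess, time

  while True:
      # One "iteration": run syz-manager until a fixed time budget expires.
      proc = subprocess.Popen(["./bin/syz-manager", "-config", "my.cfg"])
      try:
          proc.wait(timeout=6 * 3600)
      except subprocess.TimeoutExpired:
          proc.terminate()
          proc.wait()
      time.sleep(30)  # short pause before the next iteration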

Issue:
Fuzzing runs fine at first, but after a number of iterations I start hitting:

  fork not possible
  cannot allocate memory

and fuzzing eventually stalls.
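
When it gets into this state, a trivial probe along these lines (run inside the VM) should confirm whether plain fork() is failing with ENOMEM at that moment:

  # Tiny probe: does fork() itself currently fail with ENOMEM?
  import errno, os, sys

  try:
      pid = os.fork()
  except OSError as e:
      if e.errno == errno.ENOMEM:
          sys.exit("fork failed: ENOMEM (cannot allocate memory)")
      raise
  if pid == 0:
      os._exit(0)   # child does nothing and exits
  os.waitpid(pid, 0)
  print("fork OK")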

Because the VM has 2.7 GB assigned, I’m trying to understand whether this is:
1. expected memory pressure / Linux overcommit causing fork to fail (see the check sketched after this list)
2. syz-executor or related resource accumulation across iterations
3. kernel slab/kmalloc growth triggered by fuzzing
4. insufficient cleanup between iterations in my automation
5. something else specific to syzkaller on long runs
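
To test hypothesis 1 concretely, I plan to compare Committed_AS against CommitLimit around the time of the failure, roughly like this (a minimal sketch, run inside the VM):

  # Minimal sketch: check overcommit mode and commit accounting via /proc.
  def read_meminfo():
      info = {}
      with open("/proc/meminfo") as f:
          for line in f:
              key, rest = line.split(":", 1)
              info[key] = int(rest.split()[0])  # values are in kB
      return info

  with open("/proc/sys/vm/overcommit_memory") as f:
      print("vm.overcommit_memory =", f.read().strip())  # 0, 1, or 2

  mi = read_meminfo()
  # With strict accounting (mode 2), fork() fails once Committed_AS
  # approaches CommitLimit even though MemFree/MemAvailable look healthy.
  for key in ("Committed_AS", "CommitLimit", "MemAvailable"):
      print(f"{key}: {mi[key]} kB")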

Questions:
- Has anyone seen this pattern before?
- Does this sound like an operational memory issue or a genuine leak?
- Are there recommended low-memory or long-run tuning changes for this?

I’m considering/testing:
- enabling swap
- `target_reboot=true`
- periodic reboot every N iterations
- dropping caches between iterations (cleanup sketch after this list)
- checking `vm.overcommit_memory`
- disabling or reducing expensive fuzzing features if needed
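
For the cache-dropping item, the between-iteration cleanup I'm testing looks roughly like this (run as root inside the VM; the pkill pattern is an assumption about leftover process names):

  # Sketch of the between-iteration cleanup I'm testing (root, inside the VM).
  import subprocess

  # Flush dirty pages, then drop page cache plus dentries/inodes (value 3).
  subprocess.run(["sync"], check=True)
  with open("/proc/sys/vm/drop_caches", "w") as f:
      f.write("3\n")

  # Kill any syz- processes that survived the previous iteration.
  subprocess.run(["pkill", "-f", "syz-"], check=False)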

Also:
- Is there a preferred way to distinguish executor/resource buildup from kernel memory pressure? (I've drafted a sampling sketch below.)
- Are periodic reboots considered normal for constrained-memory syzkaller setups?
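
For the first question, what I have in mind is sampling both userspace and kernel memory once per iteration, roughly like this (a rough sketch):

  # Rough sketch: is memory going to executor processes or kernel slab?
  import os

  def executor_rss_kb():
      total = 0
      for pid in filter(str.isdigit, os.listdir("/proc")):
          try:
              with open(f"/proc/{pid}/comm") as f:
                  if "syz-executor" not in f.read():
                      continue
              with open(f"/proc/{pid}/status") as f:
                  for line in f:
                      if line.startswith("VmRSS:"):
                          total += int(line.split()[1])  # kB
          except (FileNotFoundError, ProcessLookupError):
              pass  # process exited mid-scan
      return total

  slab = {}
  with open("/proc/meminfo") as f:
      for line in f:
          if line.startswith(("Slab:", "SUnreclaim:")):
              key, rest = line.split(":", 1)
              slab[key] = int(rest.split()[0])  # kB

  # Executor RSS growing across iterations -> userspace buildup;
  # SUnreclaim growing with flat RSS -> kernel-side growth/leak.
  print(f"executor RSS: {executor_rss_kb()} kB, "
        f"Slab: {slab['Slab']} kB, SUnreclaim: {slab['SUnreclaim']} kB")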

If useful I can share:
- syzkaller config
- dmesg/OOM logs
- slabtop output
- memory stats during failure
- automation script logic

Any guidance would be greatly appreciated.

Thanks!