[v6.6] INFO: task hung in sync_bdevs


syzbot

Jul 9, 2025, 6:03:34 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: a5df3a702b2c Linux 6.6.96
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1410ff70580000
kernel config: https://syzkaller.appspot.com/x/.config?x=2632deddafa957e8
dashboard link: https://syzkaller.appspot.com/bug?extid=e018c055b851f1ec384e
compiler: Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/9ba5b19f9f4d/disk-a5df3a70.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/2cd779015729/vmlinux-a5df3a70.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b56fc39e5cb8/bzImage-a5df3a70.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e018c0...@syzkaller.appspotmail.com

INFO: task syz.3.874:8412 blocked for more than 143 seconds.
Not tainted 6.6.96-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.874 state:D stack:25480 pid:8412 ppid:5794 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x14e2/0x4580 kernel/sched/core.c:6700
schedule+0xbd/0x170 kernel/sched/core.c:6774
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6833
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
sync_bdevs+0x1af/0x330 block/bdev.c:1059
ksys_sync+0xba/0x150 fs/sync.c:105
__ia32_sys_sync+0xe/0x20 fs/sync.c:113
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fc208d8e929
RSP: 002b:00007fc209c32038 EFLAGS: 00000246 ORIG_RAX: 00000000000000a2
RAX: ffffffffffffffda RBX: 00007fc208fb5fa0 RCX: 00007fc208d8e929
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 00007fc208fb5fa0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fc208fb5fa0 R15: 00007ffed3c57d48
</TASK>

Showing all locks held in the system:
2 locks held by kworker/0:0/8:
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffff8880b8e288c8 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x39c/0x6d0 kernel/sched/psi.c:998
2 locks held by kworker/0:1/9:
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900000e7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900000e7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
1 lock held by khungtaskd/29:
#0: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2f760 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
2 locks held by getty/5554:
#0: ffff88803251a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc900015c02f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
1 lock held by udevd/6035:
#0: ffff8880220034c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
1 lock held by syz.3.874/8412:
#0: ffff8880220034c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz-executor/9858:
#0: ffffffff8cd35738 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#0: ffffffff8cd35738 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
1 lock held by syz.0.2296/12158:
#0: ffff8880220034c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
4 locks held by syz-executor/12591:
#0: ffff88805be1ce30 (&hdev->req_lock){+.+.}-{3:3}, at: hci_dev_do_close net/bluetooth/hci_core.c:503 [inline]
#0: ffff88805be1ce30 (&hdev->req_lock){+.+.}-{3:3}, at: hci_unregister_dev+0x1fe/0x500 net/bluetooth/hci_core.c:2683
#1: ffff88805be1c078 (&hdev->lock){+.+.}-{3:3}, at: hci_dev_close_sync+0x4c9/0xfb0 net/bluetooth/hci_sync.c:5215
#2: ffffffff8e128c28 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:1980 [inline]
#2: ffffffff8e128c28 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_hash_flush+0xa1/0x220 net/bluetooth/hci_conn.c:2539
#3: ffff88802e103b38 (&conn->lock#2){+.+.}-{3:3}, at: l2cap_conn_del+0x70/0x660 net/bluetooth/l2cap_core.c:1762
1 lock held by syz.2.2467/12593:
#0: ffffffff8cd35738 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#0: ffffffff8cd35738 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted 6.6.96-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 9858 Comm: syz-executor Not tainted 6.6.96-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
RIP: 0010:check_preemption_disabled+0x47/0x110 lib/smp_processor_id.c:55
Code: 95 75 65 8b 0d c2 3c 95 75 f7 c1 ff ff ff 7f 74 1f 65 48 8b 0c 25 28 00 00 00 48 3b 4c 24 08 0f 85 c4 00 00 00 48 83 c4 10 5b <41> 5e 41 5f 5d c3 48 c7 04 24 00 00 00 00 9c 8f 04 24 f7 04 24 00
RSP: 0018:ffffc90004457600 EFLAGS: 00000082
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0eb4c6e692410100
RDX: 0000000000000000 RSI: ffffffff8afc6ce0 RDI: ffffffff8afc6ca0
RBP: ffffc90004457750 R08: ffffffff8e4a7faf R09: 1ffffffff1c94ff5
R10: dffffc0000000000 R11: fffffbfff1c94ff6 R12: ffffffff8426951f
R13: dffffc0000000000 R14: ffffffff9716de98 R15: 1ffff9200088aed4
FS: 0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000110c38fb03 CR3: 000000000cb30000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
rcu_dynticks_curr_cpu_in_eqs include/linux/context_tracking.h:122 [inline]
rcu_is_watching+0x15/0xb0 kernel/rcu/tree.c:700
trace_lock_release include/trace/events/lock.h:69 [inline]
lock_release+0xba/0x8b0 kernel/locking/lockdep.c:5765
__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:149 [inline]
_raw_spin_unlock_irqrestore+0x71/0x110 kernel/locking/spinlock.c:194
__debug_check_no_obj_freed lib/debugobjects.c:999 [inline]
debug_check_no_obj_freed+0x51f/0x540 lib/debugobjects.c:1020
free_pages_prepare mm/page_alloc.c:1160 [inline]
free_unref_page_prepare+0x1de/0x8e0 mm/page_alloc.c:2336
free_unref_page+0x32/0x2e0 mm/page_alloc.c:2429
vfree+0x1a6/0x320 mm/vmalloc.c:2860
kcov_put kernel/kcov.c:438 [inline]
kcov_close+0x2b/0x50 kernel/kcov.c:534
__fput+0x234/0x970 fs/file_table.c:384
task_work_run+0x1ce/0x250 kernel/task_work.c:239
exit_task_work include/linux/task_work.h:43 [inline]
do_exit+0x90b/0x23c0 kernel/exit.c:883
do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
get_signal+0x12fc/0x1400 kernel/signal.c:2902
arch_do_signal_or_restart+0x96/0x780 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f72f59c11e5
Code: Unable to access opcode bytes at 0x7f72f59c11bb.
RSP: 002b:00007ffed81f3520 EFLAGS: 00000293 ORIG_RAX: 00000000000000e6
RAX: fffffffffffffdfc RBX: 0000000000000256 RCX: 00007f72f59c11e5
RDX: 00007ffed81f3560 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 00007ffed81f35cc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000001388
R13: 00000000000927c0 R14: 0000000000050032 R15: 00007ffed81f3620
</TASK>
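
The blocked task entered through the sync(2) syscall (ORIG_RAX 0xa2 on x86-64) and is waiting on a disk's open_mutex inside sync_bdevs() (block/bdev.c:1059). As a minimal sketch of that userspace entry point only, and not a reproducer for the hang (syzbot has none at this point), the path can be exercised like this:

/*
 * Minimal sketch of the userspace side of the trace above: sync(2)
 * reaches ksys_sync() -> sync_bdevs(), which takes each disk's
 * open_mutex in turn. This is not the syzkaller reproducer.
 */
#include <unistd.h>

int main(void)
{
	sync();	/* syscall 0xa2 on x86-64, matching ORIG_RAX above */
	return 0;
}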


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Oct 18, 2025, 5:18:37 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 0bbbd97a442d Linux 6.6.112
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12c4e492580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=e018c055b851f1ec384e
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15009de2580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1754ab04580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/7e387d57d751/disk-0bbbd97a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/810d42505365/vmlinux-0bbbd97a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/5d2fe115c227/bzImage-0bbbd97a.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e018c0...@syzkaller.appspotmail.com

INFO: task syz.0.44:6011 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.44 state:D stack:27592 pid:6011 ppid:5922 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
sync_bdevs+0x1af/0x330 block/bdev.c:1059
ksys_sync+0xba/0x150 fs/sync.c:105
__ia32_sys_sync+0xe/0x20 fs/sync.c:113
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f289c98efc9
RSP: 002b:00007ffc3895b4c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a2
RAX: ffffffffffffffda RBX: 00007f289cbe5fa0 RCX: 00007f289c98efc9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f289cbe5fa0 R14: 00007f289cbe5fa0 R15: 0000000000000000
</TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/29:
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
2 locks held by kworker/u4:8/3430:
2 locks held by getty/5555:
#0: ffff8880315120a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
3 locks held by kworker/1:3/5875:
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90003297d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90003297d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88802bfa9240 (&data->fib_lock){+.+.}-{3:3}, at: nsim_fib_event_work+0x26c/0x3170 drivers/net/netdevsim/fib.c:1491
1 lock held by udevd/5932:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
1 lock held by syz.0.44/6011:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz.1.45/6038:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz.2.46/6058:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz.3.47/6078:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz.4.48/6106:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz.5.49/6138:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz.6.50/6164:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz.7.51/6190:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz.8.52/6217:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059
1 lock held by syz.9.53/6249:
#0: ffff88806212f4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1af/0x330 block/bdev.c:1059

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 1087 Comm: kworker/u4:6 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:memset_orig+0x0/0xac arch/x86/lib/memset_64.S:46
Code: 88 1f c3 cc cc cc cc cc cc cc f3 0f 1e fa eb 1a 49 89 f9 40 88 f0 48 89 d1 f3 aa 4c 89 c8 c3 90 90 90 90 90 90 90 90 90 90 90 <66> 0f 1f 00 49 89 fa 40 0f b6 ce 48 b8 01 01 01 01 01 01 01 01 48
RSP: 0018:ffffc9000450f230 EFLAGS: 00000202
RAX: ffffc9000450f401 RBX: ffffc9000450f340 RCX: ffffffff813ab908
RDX: 0000000000000010 RSI: 0000000000000000 RDI: ffffc9000450f358
RBP: ffffc9000450f358 R08: ffffc9000450f367 R09: 1ffff920008a1e6c
R10: dffffc0000000000 R11: fffff520008a1e6d R12: ffffc9000450f308
R13: dffffc0000000000 R14: ffffffff81e5f9c7 R15: ffffffff8ed14b48
FS: 0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055557c49e808 CR3: 000000000cb30000 CR4: 00000000003506e0
Call Trace:
<TASK>
unwind_next_frame+0x1648/0x2970 arch/x86/kernel/unwind_orc.c:592
arch_stack_walk+0x144/0x190 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0x9c/0xe0 kernel/stacktrace.c:122
save_stack+0xf7/0x1f0 mm/page_owner.c:128
__set_page_owner+0x1d/0x60 mm/page_owner.c:192
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x1cd/0x210 mm/page_alloc.c:1554
prep_new_page mm/page_alloc.c:1561 [inline]
get_page_from_freelist+0x195c/0x19f0 mm/page_alloc.c:3191
__alloc_pages+0x1e3/0x460 mm/page_alloc.c:4457
alloc_slab_page+0x5d/0x170 mm/slub.c:1881
allocate_slab mm/slub.c:2028 [inline]
new_slab+0x87/0x2e0 mm/slub.c:2081
___slab_alloc+0xc6d/0x1300 mm/slub.c:3253
__slab_alloc mm/slub.c:3339 [inline]
__slab_alloc_node mm/slub.c:3392 [inline]
slab_alloc_node mm/slub.c:3485 [inline]
__kmem_cache_alloc_node+0x1a2/0x260 mm/slub.c:3534
__do_kmalloc_node mm/slab_common.c:1006 [inline]
__kmalloc_node_track_caller+0xa2/0x230 mm/slab_common.c:1027
kmalloc_reserve+0x117/0x260 net/core/skbuff.c:581
__alloc_skb+0x138/0x2c0 net/core/skbuff.c:650
alloc_skb include/linux/skbuff.h:1284 [inline]
nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
nsim_dev_trap_report_work+0x293/0xb00 drivers/net/netdevsim/dev.c:851
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
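
The lock dump above shows udevd and ten syz tasks all on the same &disk->open_mutex (ffff88806212f4c8): udevd at blkdev_get_by_dev() and the syz tasks at sync_bdevs(). A hedged sketch of a userspace pattern consistent with that contention, not the linked C reproducer, is several processes calling sync(2) while another repeatedly opens and closes a block device; the /dev/loop0 path below is an assumption, not something taken from the report:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/* A few workers issuing sync(2); each one serializes on every
	 * disk's open_mutex inside sync_bdevs(). */
	for (int i = 0; i < 4; i++) {
		if (fork() == 0)
			for (;;)
				sync();
	}

	/* Repeatedly open/close a block device; the open path takes the
	 * same open_mutex in blkdev_get_by_dev(). /dev/loop0 is only an
	 * assumed example device. */
	for (;;) {
		int fd = open("/dev/loop0", O_RDONLY);
		if (fd >= 0)
			close(fd);
	}
}

On a healthy kernel this pattern only causes brief contention; a wait of more than 143 seconds means whoever currently holds open_mutex is not releasing it, which is what the hung-task watchdog flags here.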


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.