Hello,
syzbot found the following issue on:
HEAD commit: 9eec9a14ee10 Linux 5.15.198
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15614d3a580000
kernel config: https://syzkaller.appspot.com/x/.config?x=353ae28c40b35af5
dashboard link: https://syzkaller.appspot.com/bug?extid=9c6a0852fc04906e078f
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/6bf27c1311e7/disk-9eec9a14.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/20b573bab166/vmlinux-9eec9a14.xz
kernel image: https://storage.googleapis.com/syzbot-assets/f5ff7bd08305/bzImage-9eec9a14.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+9c6a08...@syzkaller.appspotmail.com
INFO: task syz-executor:17679 blocked for more than 141 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor state:D stack:27472 pid:17679 ppid: 1 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11ef/0x43c0 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6537
rwsem_down_read_slowpath+0x548/0x9d0 kernel/locking/rwsem.c:1055
__down_read_common kernel/locking/rwsem.c:1239 [inline]
__down_read kernel/locking/rwsem.c:1252 [inline]
down_read+0x96/0x2e0 kernel/locking/rwsem.c:1500
__list_lru_init+0x7f/0xa50 mm/list_lru.c:594
alloc_super+0x76c/0x950 fs/super.c:272
sget_fc+0x2dc/0x720 fs/super.c:554
vfs_get_super fs/super.c:1167 [inline]
get_tree_nodev+0x26/0x160 fs/super.c:1202
vfs_get_tree+0x88/0x270 fs/super.c:1530
fc_mount+0x18/0xa0 fs/namespace.c:1005
mq_create_mount ipc/mqueue.c:487 [inline]
mq_init_ns+0x39d/0x510 ipc/mqueue.c:1703
create_ipc_ns ipc/namespace.c:58 [inline]
copy_ipcs+0x2e3/0x4b0 ipc/namespace.c:84
create_new_namespaces+0x212/0x6f0 kernel/nsproxy.c:90
unshare_nsproxy_namespaces+0x116/0x160 kernel/nsproxy.c:226
ksys_unshare+0x4ca/0x8b0 kernel/fork.c:3175
__do_sys_unshare kernel/fork.c:3249 [inline]
__se_sys_unshare kernel/fork.c:3247 [inline]
__x64_sys_unshare+0x34/0x40 kernel/fork.c:3247
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7fe31142b1a7
RSP: 002b:00007fff8aced348 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007fe31142b1a7
RDX: 0000000000000000 RSI: 00007fe311493f7b RDI: 0000000008000000
RBP: 00007fe3116a57b8 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000044000 R11: 0000000000000246 R12: 0000000000000008
R13: 0000000000000003 R14: 00007fff8aced5b8 R15: 0000000000000000
</TASK>
INFO: task syz-executor:17685 blocked for more than 142 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor state:D stack:27472 pid:17685 ppid: 1 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11ef/0x43c0 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6537
rwsem_down_read_slowpath+0x548/0x9d0 kernel/locking/rwsem.c:1055
__down_read_common kernel/locking/rwsem.c:1239 [inline]
__down_read kernel/locking/rwsem.c:1252 [inline]
down_read+0x96/0x2e0 kernel/locking/rwsem.c:1500
__list_lru_init+0x7f/0xa50 mm/list_lru.c:594
alloc_super+0x76c/0x950 fs/super.c:272
sget_fc+0x2dc/0x720 fs/super.c:554
vfs_get_super fs/super.c:1167 [inline]
get_tree_nodev+0x26/0x160 fs/super.c:1202
vfs_get_tree+0x88/0x270 fs/super.c:1530
fc_mount+0x18/0xa0 fs/namespace.c:1005
mq_create_mount ipc/mqueue.c:487 [inline]
mq_init_ns+0x39d/0x510 ipc/mqueue.c:1703
create_ipc_ns ipc/namespace.c:58 [inline]
copy_ipcs+0x2e3/0x4b0 ipc/namespace.c:84
create_new_namespaces+0x212/0x6f0 kernel/nsproxy.c:90
unshare_nsproxy_namespaces+0x116/0x160 kernel/nsproxy.c:226
ksys_unshare+0x4ca/0x8b0 kernel/fork.c:3175
__do_sys_unshare kernel/fork.c:3249 [inline]
__se_sys_unshare kernel/fork.c:3247 [inline]
__x64_sys_unshare+0x34/0x40 kernel/fork.c:3247
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7faab7d3e1a7
RSP: 002b:00007ffec9b598b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007faab7d3e1a7
RDX: 0000000000000000 RSI: 00007faab7da6f7b RDI: 0000000008000000
RBP: 00007faab7fb87b8 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000044000 R11: 0000000000000246 R12: 0000000000000008
R13: 0000000000000003 R14: 00007ffec9b59b28 R15: 0000000000000000
</TASK>
INFO: task syz-executor:17688 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor state:D stack:27472 pid:17688 ppid: 1 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5049 [inline]
__schedule+0x11ef/0x43c0 kernel/sched/core.c:6395
schedule+0x11b/0x1e0 kernel/sched/core.c:6478
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6537
rwsem_down_read_slowpath+0x548/0x9d0 kernel/locking/rwsem.c:1055
__down_read_common kernel/locking/rwsem.c:1239 [inline]
__down_read kernel/locking/rwsem.c:1252 [inline]
down_read+0x96/0x2e0 kernel/locking/rwsem.c:1500
__list_lru_init+0x7f/0xa50 mm/list_lru.c:594
alloc_super+0x76c/0x950 fs/super.c:272
sget_fc+0x2dc/0x720 fs/super.c:554
vfs_get_super fs/super.c:1167 [inline]
get_tree_nodev+0x26/0x160 fs/super.c:1202
vfs_get_tree+0x88/0x270 fs/super.c:1530
fc_mount+0x18/0xa0 fs/namespace.c:1005
mq_create_mount ipc/mqueue.c:487 [inline]
mq_init_ns+0x39d/0x510 ipc/mqueue.c:1703
create_ipc_ns ipc/namespace.c:58 [inline]
copy_ipcs+0x2e3/0x4b0 ipc/namespace.c:84
create_new_namespaces+0x212/0x6f0 kernel/nsproxy.c:90
unshare_nsproxy_namespaces+0x116/0x160 kernel/nsproxy.c:226
ksys_unshare+0x4ca/0x8b0 kernel/fork.c:3175
__do_sys_unshare kernel/fork.c:3249 [inline]
__se_sys_unshare kernel/fork.c:3247 [inline]
__x64_sys_unshare+0x34/0x40 kernel/fork.c:3247
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f49930d51a7
RSP: 002b:00007ffefe273638 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f49930d51a7
RDX: 0000000000000000 RSI: 00007f499313df7b RDI: 0000000008000000
RBP: 00007f499334f7b8 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000044000 R11: 0000000000000246 R12: 0000000000000008
R13: 0000000000000003 R14: 00007ffefe2738a8 R15: 0000000000000000
</TASK>
Showing all locks held in the system:
3 locks held by kworker/0:1/13:
#0: ffff888016c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc90000d27d00 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffffffff8c323528 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
#2: ffffffff8c323528 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3a5/0x750 kernel/rcu/tree_exp.h:845
1 lock held by khungtaskd/27:
#0: ffffffff8c31eaa0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by getty/3948:
#0: ffff88802c46e098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc900025e62e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x5df/0x1a70 drivers/tty/n_tty.c:2158
2 locks held by kworker/0:6/4234:
#0: ffff888016c72138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc9000323fd00 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
4 locks held by kworker/u4:5/4280:
#0: ffff888016dcd938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
#1: ffffc9000347fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
#2: ffffffff8d4308d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x148/0xba0 net/core/net_namespace.c:589
#3: ffffffff8c323528 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#3: ffffffff8c323528 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x2d1/0x750 kernel/rcu/tree_exp.h:845
2 locks held by kworker/1:20/13155:
1 lock held by syz-executor/16611:
#0: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: list_lru_destroy+0x46/0x430 mm/list_lru.c:628
1 lock held by syz.0.4378/17578:
#0: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: list_lru_destroy+0x46/0x430 mm/list_lru.c:628
10 locks held by syz.6.4389/17614:
1 lock held by syz.9.4399/17646:
#0: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: list_lru_destroy+0x46/0x430 mm/list_lru.c:628
2 locks held by syz-executor/17679:
#0: ffff888047f800e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17685:
#0: ffff888065e100e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17688:
#0: ffff88806253e0e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17697:
#0: ffff888064fd00e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
1 lock held by syz.4.4413/17700:
#0: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: list_lru_destroy+0x46/0x430 mm/list_lru.c:628
2 locks held by syz-executor/17705:
#0: ffff8880608d80e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17715:
#0: ffff888067a5e0e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17719:
#0: ffff8880780f20e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17725:
#0: ffff8880616840e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17728:
#0: ffff88802c0620e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17732:
#0: ffff8881474d40e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17741:
#0: ffff888020e8e0e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17745:
#0: ffff88802f3f80e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17749:
#0: ffff88807e13a0e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17752:
#0: ffff88807d47a0e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
2 locks held by syz-executor/17756:
#0: ffff888020eae0e0 (&type->s_umount_key#23/1){+.+.}-{3:3}, at: alloc_super+0x201/0x950 fs/super.c:229
#1: ffffffff8c3fe9d0 (memcg_cache_ids_sem){++++}-{3:3}, at: __list_lru_init+0x7f/0xa50 mm/list_lru.c:594
=============================================
NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x188/0x250 lib/dump_stack.c:106
nmi_cpu_backtrace+0x3a2/0x3d0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
watchdog+0xe0f/0xe50 kernel/hung_task.c:369
kthread+0x436/0x520 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 17614 Comm: syz.6.4389 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:__lock_acquire+0xfed/0x7d10 kernel/locking/lockdep.c:-1
Code: 40 80 e1 07 80 c1 03 38 c1 7c c7 48 8b 7c 24 40 e8 78 0c 61 00 eb bb 44 89 e0 25 ff 1f 00 00 41 c1 ec 03 41 81 e4 00 60 00 00 <41> 09 c4 4c 89 fe 48 c1 ee 20 89 f0 c1 c0 04 41 29 f4 44 31 e0 44
RSP: 0018:ffffc90000dcf520 EFLAGS: 00000046
RAX: 000000000000005e RBX: 00000000000e404b RCX: ffff88804739c670
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff901d10c8
RBP: ffffc90000dcf770 R08: dffffc0000000000 R09: 1ffffffff203a219
R10: dffffc0000000000 R11: fffffbfff203a21a R12: 0000000000000000
R13: ffff88804739bb80 R14: 1ffff11008e738cc R15: ffffffffffffffff
FS: 00007f1ca94c36c0(0000) GS:ffff8880b9100000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000056244762a000 CR3: 00000000483a6000 CR4: 00000000003506e0
DR0: 0000200000000300 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff1 DR7: 0000000000000600
Call Trace:
<IRQ>
lock_acquire+0x19e/0x400 kernel/locking/lockdep.c:5623
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xb0/0x100 kernel/locking/spinlock.c:162
hrtimer_interrupt+0xf8/0x8d0 kernel/time/hrtimer.c:1792
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097 [inline]
__sysvec_apic_timer_interrupt+0x137/0x4a0 arch/x86/kernel/apic/apic.c:1114
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0x4d/0xc0 arch/x86/kernel/apic/apic.c:1108
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:strcpy+0x89/0xa0 lib/string.c:95
Code: f4 e8 4b 8b b7 fd 4c 89 e6 4c 89 f8 eb be 89 fa 80 e2 07 38 ca 7c cd 49 89 c7 49 89 f4 e8 7f 8b b7 fd 4c 89 e6 4c 89 f8 eb ba <5b> 41 5c 41 5e 41 5f 5d c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40
RSP: 0018:ffffc90000dcfb70 EFLAGS: 00000246
RAX: ffffe8ffffbae018 RBX: dffffc0000000000 RCX: 0000000000000000
RDX: 000000008b2f1f04 RSI: ffffffff8b2f1f40 RDI: ffffe8ffffbae024
RBP: 0000000000000000 R08: 0000000000000002 R09: 0000000000000001
R10: dffffc0000000000 R11: fffffbfff1b13b16 R12: dffffc0000000000
R13: 1ffff920001b9f7c R14: 000000000000000d R15: ffff8880227caf70
perf_trace_lock_acquire+0x2f5/0x3e0 include/trace/events/lock.h:13
trace_lock_acquire include/trace/events/lock.h:13 [inline]
lock_acquire+0x3d7/0x400 kernel/locking/lockdep.c:5594
__raw_read_lock_bh include/linux/rwlock_api_smp.h:176 [inline]
_raw_read_lock_bh+0x3a/0x50 kernel/locking/spinlock.c:252
ebt_do_table+0xe1/0x2740 net/bridge/netfilter/ebtables.c:211
nf_hook_entry_hookfn include/linux/netfilter.h:142 [inline]
nf_hook_slow+0xb9/0x200 net/netfilter/core.c:584
nf_hook include/linux/netfilter.h:257 [inline]
NF_HOOK+0x1f3/0x380 include/linux/netfilter.h:300
__br_forward+0x433/0x610 net/bridge/br_forward.c:115
deliver_clone net/bridge/br_forward.c:131 [inline]
maybe_deliver+0xb5/0x150 net/bridge/br_forward.c:190
br_flood+0x2fc/0x450 net/bridge/br_forward.c:232
br_handle_frame_finish+0xfbd/0x1270 net/bridge/br_input.c:180
br_nf_hook_thresh+0x3c9/0x4a0 net/bridge/br_netfilter_hooks.c:1155
br_nf_pre_routing_finish_ipv6+0x9c2/0xc90 net/bridge/br_netfilter_ipv6.c:-1
NF_HOOK include/linux/netfilter.h:302 [inline]
br_nf_pre_routing_ipv6+0x355/0x680 net/bridge/br_netfilter_ipv6.c:237
nf_hook_entry_hookfn include/linux/netfilter.h:142 [inline]
nf_hook_bridge_pre net/bridge/br_input.c:242 [inline]
br_handle_frame+0x893/0x1190 net/bridge/br_input.c:384
__netif_receive_skb_core+0xfef/0x3690 net/core/dev.c:5419
__netif_receive_skb_one_core net/core/dev.c:5523 [inline]
__netif_receive_skb+0x74/0x290 net/core/dev.c:5639
process_backlog+0x370/0x790 net/core/dev.c:6516
__napi_poll+0xc0/0x430 net/core/dev.c:7075
napi_poll net/core/dev.c:7142 [inline]
net_rx_action+0x4d4/0xa10 net/core/dev.c:7232
handle_softirqs+0x339/0x830 kernel/softirq.c:576
__do_softirq kernel/softirq.c:610 [inline]
invoke_softirq kernel/softirq.c:450 [inline]
__irq_exit_rcu+0x13b/0x230 kernel/softirq.c:659
irq_exit_rcu+0x5/0x20 kernel/softirq.c:671
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
sysvec_apic_timer_interrupt+0xa0/0xc0 arch/x86/kernel/apic/apic.c:1108
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:finish_lock_switch+0x134/0x280 kernel/sched/core.c:4804
Code: be ff ff ff ff e8 4c 80 67 08 85 c0 74 4a 4d 85 ff 75 66 0f 1f 44 00 00 48 89 df e8 b6 eb 70 08 e8 41 e0 2a 00 fb 48 83 c4 08 <5b> 41 5c 41 5d 41 5e 41 5f 5d c3 48 89 df e8 c9 09 fe ff 43 80 3c
RSP: 0018:ffffc9000327f428 EFLAGS: 00000286
RAX: d1b070bd808a7400 RBX: ffff8880b913a340 RCX: d1b070bd808a7400
RDX: dffffc0000000000 RSI: ffffffff8a2b2780 RDI: ffffffff8a79f780
RBP: 1ffff11017227613 R08: ffffffff901d10cf R09: 1ffffffff203a219
R10: dffffc0000000000 R11: fffffbfff203a21a R12: 1ffff110172275c1
R13: dffffc0000000000 R14: ffff8880b913ae08 R15: 0000000000000000
finish_task_switch+0x12f/0x640 kernel/sched/core.c:4921
context_switch kernel/sched/core.c:5052 [inline]
__schedule+0x11f7/0x43c0 kernel/sched/core.c:6395
preempt_schedule_irq+0xbb/0x160 kernel/sched/core.c:6799
irqentry_exit+0x63/0x70 kernel/entry/common.c:432
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:29 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:70 [inline]
RIP: 0010:___might_sleep+0x16f/0x610 kernel/sched/core.c:9626
Code: 0f b6 04 26 84 c0 0f 85 53 03 00 00 45 03 7d 00 44 3b 7c 24 2c 75 70 42 c6 44 23 0c 00 48 c7 84 24 a0 00 00 00 00 00 00 00 9c <8f> 84 24 a0 00 00 00 f7 84 24 a0 00 00 00 00 02 00 00 42 c6 44 23
RSP: 0018:ffffc9000327f818 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 1ffff9200064ff0c RCX: d1b070bd808a7400
RDX: ffff88804739bb80 RSI: ffffffff8a2b3a20 RDI: ffffffff8a79f780
RBP: ffffc9000327f940 R08: ffffffff8d89d8af R09: 1ffffffff1b13b15
R10: dffffc0000000000 R11: fffffbfff1b13b16 R12: dffffc0000000000
R13: ffff88804739bfbc R14: 1ffff11008e737f7 R15: 0000000000000000
might_alloc include/linux/sched/mm.h:209 [inline]
slab_pre_alloc_hook+0x42/0xc0 mm/slab.h:492
slab_alloc_node mm/slub.c:3139 [inline]
slab_alloc mm/slub.c:3233 [inline]
kmem_cache_alloc_trace+0x47/0x2a0 mm/slub.c:3250
kmalloc include/linux/slab.h:607 [inline]
__memcg_init_list_lru_node mm/list_lru.c:339 [inline]
memcg_update_list_lru_node mm/list_lru.c:396 [inline]
memcg_update_list_lru mm/list_lru.c:473 [inline]
memcg_update_all_list_lrus+0x2cc/0xba0 mm/list_lru.c:510
memcg_alloc_cache_id mm/memcontrol.c:2964 [inline]
memcg_online_kmem mm/memcontrol.c:3656 [inline]
mem_cgroup_css_alloc+0xb25/0x1640 mm/memcontrol.c:5310
css_create kernel/cgroup/cgroup.c:5388 [inline]
cgroup_apply_control_enable+0x3cc/0xb00 kernel/cgroup/cgroup.c:3212
cgroup_mkdir+0xcc0/0xec0 kernel/cgroup/cgroup.c:5608
kernfs_iop_mkdir+0x24c/0x3d0 fs/kernfs/dir.c:1175
vfs_mkdir+0x387/0x570 fs/namei.c:4073
do_mkdirat+0x1df/0x5b0 fs/namei.c:4098
__do_sys_mkdirat fs/namei.c:4113 [inline]
__se_sys_mkdirat fs/namei.c:4111 [inline]
__x64_sys_mkdirat+0x85/0x90 fs/namei.c:4111
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f1cab267eb9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f1ca94c3028 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 00007f1cab4e2fa0 RCX: 00007f1cab267eb9
RDX: 00000000000001ff RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007f1cab2d5c1f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f1cab4e3038 R14: 00007f1cab4e2fa0 R15: 00007ffe4c7bfb58
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup