Hello,
syzbot found the following issue on:
HEAD commit: 08667c1437c0 Linux 6.6.132
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=123a75da580000
kernel config: https://syzkaller.appspot.com/x/.config?x=c5b35c4db8465904
dashboard link: https://syzkaller.appspot.com/bug?extid=803b72dd9bcfbb11e5d0
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/72bea5e4b83d/disk-08667c14.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/1d340a6f6d20/vmlinux-08667c14.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d27f8b9c0a07/bzImage-08667c14.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+803b72...@syzkaller.appspotmail.com
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P5891/1:b..l P5888/1:b..l P5863/1:b..l P5889/1:b..l
rcu: (detected by 0, t=10502 jiffies, g=9681, q=1236 ncpus=2)
task:syz.1.18 state:R running task stack:26568 pid:5889 ppid:5769 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
preempt_schedule_common+0x82/0xc0 kernel/sched/core.c:6867
preempt_schedule+0xc0/0xd0 kernel/sched/core.c:6891
preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
__raw_spin_unlock include/linux/spinlock_api_smp.h:143 [inline]
_raw_spin_unlock+0x3a/0x40 kernel/locking/spinlock.c:186
spin_unlock include/linux/spinlock.h:391 [inline]
zap_pte_range mm/memory.c:1523 [inline]
zap_pmd_range mm/memory.c:1571 [inline]
zap_pud_range mm/memory.c:1600 [inline]
zap_p4d_range mm/memory.c:1621 [inline]
unmap_page_range+0x2315/0x3000 mm/memory.c:1642
unmap_vmas+0x286/0x3f0 mm/memory.c:1732
exit_mmap+0x238/0xb90 mm/mmap.c:3302
__mmput+0x118/0x3c0 kernel/fork.c:1355
exit_mm+0x1f2/0x2c0 kernel/exit.c:569
do_exit+0x8dd/0x2460 kernel/exit.c:870
do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
get_signal+0x12fc/0x13f0 kernel/signal.c:2902
arch_do_signal_or_restart+0xc2/0x800 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
__syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
do_syscall_64+0x61/0xa0 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fa35539c819
RSP: 002b:00007ffd36237e18 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: ffffffffffffff92 RBX: 000000000001537e RCX: 00007fa35539c819
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa355615fac
RBP: 0000000000000032 R08: 0000000000745d1e R09: 0000000000000000
R10: 00007ffd36237f20 R11: 0000000000000246 R12: 00007ffd36237f40
R13: 00007fa355615fac R14: 00000000000153b0 R15: 00007ffd36237f20
</TASK>
task:syz.1.2 state:R running task stack:22864 pid:5863 ppid:5769 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
preempt_schedule_irq+0xbf/0x150 kernel/sched/core.c:7010
irqentry_exit+0x67/0x70 kernel/entry/common.c:438
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:lock_is_held_type+0x13e/0x190 kernel/locking/lockdep.c:5830
Code: 75 40 48 c7 04 24 00 00 00 00 9c 8f 04 24 f7 04 24 00 02 00 00 75 46 41 f7 c5 00 02 00 00 74 01 fb 65 48 8b 04 25 28 00 00 00 <48> 3b 44 24 08 75 3c 89 e8 48 83 c4 10 5b 41 5c 41 5d 41 5e 41 5f
RSP: 0018:ffffc90004a86b80 EFLAGS: 00000206
RAX: 2d6ae74d07382700 RBX: ffff88802c9a1e00 RCX: 2d6ae74d07382700
RDX: 0000000000000000 RSI: ffffffff8acadb60 RDI: ffffffff8b1c8de0
RBP: 0000000000000000 R08: dffffc0000000000 R09: 1ffffffff22388a0
R10: dffffc0000000000 R11: fffffbfff22388a1 R12: 0000000000000004
R13: 0000000000000246 R14: ffffffff8d15bbb8 R15: ffff88802c9a2980
lock_is_held include/linux/lockdep.h:288 [inline]
task_css include/linux/cgroup.h:436 [inline]
mem_cgroup_from_task+0x77/0x110 mm/memcontrol.c:1038
get_mem_cgroup_from_mm+0xd6/0x290 mm/memcontrol.c:1091
__mem_cgroup_charge+0x15/0x80 mm/memcontrol.c:7110
mem_cgroup_charge include/linux/memcontrol.h:686 [inline]
__filemap_add_folio+0xb57/0x17b0 mm/filemap.c:859
filemap_add_folio+0xae/0x3c0 mm/filemap.c:967
page_cache_ra_unbounded+0x1a3/0x770 mm/readahead.c:250
do_async_mmap_readahead mm/filemap.c:3274 [inline]
filemap_fault+0x565/0x15b0 mm/filemap.c:3328
__do_fault+0x13b/0x4d0 mm/memory.c:4244
do_read_fault mm/memory.c:4638 [inline]
do_fault mm/memory.c:4775 [inline]
do_pte_missing mm/memory.c:3689 [inline]
handle_pte_fault mm/memory.c:5047 [inline]
__handle_mm_fault mm/memory.c:5188 [inline]
handle_mm_fault+0x2299/0x4c00 mm/memory.c:5353
faultin_page mm/gup.c:868 [inline]
__get_user_pages+0x5d0/0x1380 mm/gup.c:1167
__get_user_pages_locked mm/gup.c:1431 [inline]
get_dump_page+0x10c/0x200 mm/gup.c:1939
dump_user_range+0x127/0x860 fs/coredump.c:982
elf_core_dump+0x31d0/0x3770 fs/binfmt_elf.c:2184
do_coredump+0x17cc/0x24d0 fs/coredump.c:833
get_signal+0x1133/0x13f0 kernel/signal.c:2888
arch_do_signal_or_restart+0xc2/0x800 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
irqentry_exit_to_user_mode+0x9/0x30 kernel/entry/common.c:315
exc_page_fault+0x8c/0x100 arch/x86/mm/fault.c:1519
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:608
RIP: 0033:0xa5
RSP: 002b:0000200000000438 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 00007fa355616180 RCX: 00007fa35539c819
RDX: 00002000000004c0 RSI: 0000200000000430 RDI: 0000000008000000
RBP: 00007fa355432c91 R08: 0000200000000540 R09: 0000200000000540
R10: 0000200000000500 R11: 0000000000000206 R12: 0000000000000000
R13: 00007fa355616218 R14: 00007fa355616180 R15: 00007ffd36237cb8
</TASK>
task:syz.3.17 state:R running task stack:24232 pid:5888 ppid:5773 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
preempt_schedule_common+0x82/0xc0 kernel/sched/core.c:6867
preempt_schedule+0xc0/0xd0 kernel/sched/core.c:6891
preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
__raw_spin_unlock include/linux/spinlock_api_smp.h:143 [inline]
_raw_spin_unlock+0x3a/0x40 kernel/locking/spinlock.c:186
spin_unlock include/linux/spinlock.h:391 [inline]
insert_page mm/memory.c:1869 [inline]
vm_insert_page+0x57c/0x8c0 mm/memory.c:2021
kcov_mmap+0xe9/0x160 kernel/kcov.c:505
call_mmap include/linux/fs.h:2023 [inline]
mmap_file mm/internal.h:98 [inline]
__mmap_region mm/mmap.c:2790 [inline]
mmap_region+0xf8e/0x2000 mm/mmap.c:2941
do_mmap+0x92c/0x10a0 mm/mmap.c:1385
vm_mmap_pgoff+0x1c4/0x3f0 mm/util.c:556
ksys_mmap_pgoff+0x520/0x700 mm/mmap.c:1431
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f8efbb9c582
RSP: 002b:00007ffe6c81a298 EFLAGS: 00000206 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f8ef99f6000 RCX: 00007f8efbb9c582
RDX: 0000000000000003 RSI: 0000000000400000 RDI: 00007f8ef99f6000
RBP: 0000000000000011 R08: 00000000000000db R09: 0000000000000000
R10: 0000000000000011 R11: 0000000000000206 R12: 0000000000000003
R13: 0000000000000003 R14: 0000000000000000 R15: 00007f8efbe15fa0
</TASK>
task:syz.3.17 state:R running task stack:24040 pid:5891 ppid:5773 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
preempt_schedule_irq+0xbf/0x150 kernel/sched/core.c:7010
irqentry_exit+0x67/0x70 kernel/entry/common.c:438
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:__sanitizer_cov_trace_pc+0x0/0x60 kernel/kcov.c:212
Code: 00 0f 0b 0f 1f 80 00 00 00 00 f3 0f 1e fa 53 48 89 fb e8 13 00 00 00 48 8b 3d 8c ec 03 0d 48 89 de 5b e9 23 67 57 00 cc cc cc <f3> 0f 1e fa 48 8b 04 24 65 48 8b 0d e0 93 7c 7e 65 8b 15 e1 93 7c
RSP: 0018:ffffc90004c176b0 EFLAGS: 00000246
RAX: ffffffff813b3600 RBX: 00007f8efbb9c819 RCX: 0000000000000001
RDX: ffff88807df63c00 RSI: 0000000000000001 RDI: 00007f8efbb9c819
RBP: 0000000000000001 R08: ffff88807df63c00 R09: 0000000000000003
R10: 0000000000000004 R11: 0000000000000002 R12: ffffffff8aa000d0
R13: 1ffff1100fbec82e R14: dffffc0000000000 R15: 1ffff92000982eee
in_gate_area_no_mm+0xe/0x50 arch/x86/entry/vsyscall/vsyscall_64.c:331
is_kernel_text include/linux/kallsyms.h:31 [inline]
core_kernel_text kernel/extable.c:68 [inline]
kernel_text_address+0x2d/0xd0 kernel/extable.c:99
__kernel_text_address+0xd/0x30 kernel/extable.c:79
unwind_get_return_address+0x5d/0xc0 arch/x86/kernel/unwind_orc.c:369
arch_stack_walk+0x11d/0x190 arch/x86/kernel/stacktrace.c:26
stack_trace_save+0xaa/0x100 kernel/stacktrace.c:122
save_stack+0x125/0x230 mm/page_owner.c:128
__reset_page_owner+0x4e/0x190 mm/page_owner.c:149
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1181 [inline]
free_unref_page_prepare+0x7b2/0x8c0 mm/page_alloc.c:2365
free_unref_page+0x32/0x2e0 mm/page_alloc.c:2458
__slab_free+0x35a/0x400 mm/slub.c:3736
qlink_free mm/kasan/quarantine.c:166 [inline]
qlist_free_all+0x75/0xd0 mm/kasan/quarantine.c:185
kasan_quarantine_reduce+0x143/0x160 mm/kasan/quarantine.c:292
__kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:306
kasan_slab_alloc include/linux/kasan.h:188 [inline]
slab_post_alloc_hook+0x6e/0x4b0 mm/slab.h:767
slab_alloc_node mm/slub.c:3495 [inline]
kmem_cache_alloc_node+0x14c/0x320 mm/slub.c:3540
perf_event_alloc+0x15a/0x21b0 kernel/events/core.c:12099
__do_sys_perf_event_open kernel/events/core.c:12718 [inline]
__se_sys_perf_event_open+0x740/0x1c50 kernel/events/core.c:12609
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f8efbb9c819
RSP: 002b:00007f8efca80028 EFLAGS: 00000246 ORIG_RAX: 000000000000012a
RAX: ffffffffffffffda RBX: 00007f8efbe16090 RCX: 00007f8efbb9c819
RDX: ffefffffffffffff RSI: 0000000000000000 RDI: 0000200000000fc0
RBP: 00007f8efbc32c91 R08: 0000000000000000 R09: 0000000000000000
R10: ffffffffffffffff R11: 0000000000000246 R12: 0000000000000000
R13: 00007f8efbe16128 R14: 00007f8efbe16090 R15: 00007ffe6c81a1e8
</TASK>
rcu: rcu_preempt kthread starved for 6864 jiffies! g9681 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt state:R running task stack:27656 pid:17 ppid:2 flags:0x00004000
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5381 [inline]
__schedule+0x1553/0x45a0 kernel/sched/core.c:6700
schedule+0xbd/0x170 kernel/sched/core.c:6774
schedule_timeout+0x188/0x2d0 kernel/time/timer.c:2168
rcu_gp_fqs_loop+0x313/0x1590 kernel/rcu/tree.c:1667
rcu_gp_kthread+0x9d/0x3b0 kernel/rcu/tree.c:1866
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 5882 Comm: syz.2.15 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
RIP: 0010:kasan_check_range+0x0/0x290 mm/kasan/generic.c:186
Code: 48 c1 ee 03 48 01 c6 48 89 c7 e8 5b b3 a7 08 31 c0 c3 0f 0b b8 ea ff ff ff c3 0f 0b b8 ea ff ff ff c3 cc cc cc cc cc cc cc cc <66> 0f 1f 00 b0 01 48 85 f6 0f 84 b4 01 00 00 55 41 57 41 56 41 55
RSP: 0018:ffffc900001ef898 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff81682ab7
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8e8b0c28
RBP: ffffc900001ef9a8 R08: 0000000000000001 R09: 0000000000000000
R10: dffffc0000000000 R11: fffffbfff1d16186 R12: 1ffff9200003df20
R13: ffffffff97568d48 R14: 0000000000000001 R15: dffffc0000000000
FS: 00007f978c7486c0(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055555e01e7d0 CR3: 000000002e8c8000 CR4: 00000000003506e0
Call Trace:
<IRQ>
instrument_atomic_read include/linux/instrumented.h:68 [inline]
_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
cpumask_test_cpu include/linux/cpumask.h:504 [inline]
cpu_online include/linux/cpumask.h:1082 [inline]
trace_lock_acquire include/trace/events/lock.h:24 [inline]
lock_acquire+0xb7/0x420 kernel/locking/lockdep.c:5725
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xb4/0x100 kernel/locking/spinlock.c:162
debug_object_deactivate+0x67/0x390 lib/debugobjects.c:764
debug_hrtimer_deactivate kernel/time/hrtimer.c:455 [inline]
__run_hrtimer kernel/time/hrtimer.c:1718 [inline]
__hrtimer_run_queues+0x2cb/0xc40 kernel/time/hrtimer.c:1814
hrtimer_interrupt+0x3c9/0x9c0 kernel/time/hrtimer.c:1876
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1077 [inline]
__sysvec_apic_timer_interrupt+0xfb/0x3b0 arch/x86/kernel/apic/apic.c:1094
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0x51/0xc0 arch/x86/kernel/apic/apic.c:1088
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:lock_acquire+0x208/0x420 kernel/locking/lockdep.c:5758
Code: f7 84 24 80 00 00 00 00 02 00 00 43 c6 44 3c 04 f8 0f 85 f0 00 00 00 41 f7 c6 00 02 00 00 74 01 fb 48 c7 44 24 60 0e 36 e0 45 <4b> c7 04 3c 00 00 00 00 43 c7 44 3c 08 00 00 00 00 65 48 8b 04 25
RSP: 0018:ffffc900001efe40 EFLAGS: 00000206
RAX: 0000000000000001 RBX: 0000000000000000 RCX: 433e72003e2f8400
RDX: 0000000000000000 RSI: ffffffff8acadb60 RDI: ffffffff8b1c8de0
RBP: ffffc900001eff48 R08: dffffc0000000000 R09: 1ffffffff22388a0
R10: dffffc0000000000 R11: fffffbfff22388a1 R12: 1ffff9200003dfd4
R13: ffffffff8d1320a0 R14: 0000000000000246 R15: dffffc0000000000
rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
rcu_read_lock include/linux/rcupdate.h:786 [inline]
net_generic+0x3a/0x240 include/net/netns/generic.h:45
is_vlan_ip net/bridge/br_netfilter_hooks.c:92 [inline]
br_nf_forward_ip+0x388/0x1110 net/bridge/br_netfilter_hooks.c:720
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_slow+0xbd/0x200 net/netfilter/core.c:626
nf_hook include/linux/netfilter.h:259 [inline]
NF_HOOK+0x23e/0x3e0 include/linux/netfilter.h:302
__br_forward+0x433/0x610 net/bridge/br_forward.c:115
deliver_clone net/bridge/br_forward.c:131 [inline]
maybe_deliver+0xb5/0x150 net/bridge/br_forward.c:191
br_flood+0x31b/0x670 net/bridge/br_forward.c:237
br_handle_frame_finish+0x13c5/0x18f0 net/bridge/br_input.c:215
br_nf_hook_thresh+0x3cd/0x4a0 net/bridge/br_netfilter_hooks.c:1184
br_nf_pre_routing_finish_ipv6+0x9dc/0xd00 net/bridge/br_netfilter_ipv6.c:-1
NF_HOOK include/linux/netfilter.h:304 [inline]
br_nf_pre_routing_ipv6+0x349/0x6b0 net/bridge/br_netfilter_ipv6.c:184
nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
br_handle_frame+0x1245/0x14d0 net/bridge/br_input.c:424
__netif_receive_skb_core+0xfab/0x3af0 net/core/dev.c:5528
__netif_receive_skb_one_core net/core/dev.c:5632 [inline]
__netif_receive_skb+0x74/0x290 net/core/dev.c:5748
process_backlog+0x391/0x6f0 net/core/dev.c:6076
__napi_poll+0xc0/0x460 net/core/dev.c:6638
napi_poll net/core/dev.c:6705 [inline]
net_rx_action+0x616/0xc40 net/core/dev.c:6841
handle_softirqs+0x280/0x820 kernel/softirq.c:578
__do_softirq kernel/softirq.c:612 [inline]
invoke_softirq kernel/softirq.c:452 [inline]
__irq_exit_rcu+0xd3/0x190 kernel/softirq.c:661
irq_exit_rcu+0x9/0x20 kernel/softirq.c:673
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
sysvec_apic_timer_interrupt+0xa4/0xc0 arch/x86/kernel/apic/apic.c:1088
</IRQ>
<TASK>
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:raw_spin_rq_unlock_irq+0x13/0x90 kernel/sched/sched.h:1392
Code: 5d 41 5e 41 5f 5d c3 e8 6b 45 27 09 66 2e 0f 1f 84 00 00 00 00 00 90 41 57 41 56 53 66 90 e8 e4 ee 30 09 e8 cf b4 2e 00 fb 5b <41> 5e 41 5f c3 49 be 00 00 00 00 00 fc ff df 49 89 ff 48 8d 9f 58
RSP: 0018:ffffc90004bf71e8 EFLAGS: 00000282
RAX: 433e72003e2f8400 RBX: ffff8880b8f3cd48 RCX: 433e72003e2f8400
RDX: dffffc0000000000 RSI: ffffffff8acac9e0 RDI: ffffffff8b1c8de0
RBP: ffffc90004bf7410 R08: ffffffff911c45af R09: 1ffffffff22388b5
R10: dffffc0000000000 R11: fffffbfff22388b6 R12: dffffc0000000000
R13: ffff888022b03c00 R14: ffff8880b8f3c000 R15: ffff888022b03c00
__schedule+0x179f/0x45a0 kernel/sched/core.c:6704
preempt_schedule_common+0x82/0xc0 kernel/sched/core.c:6867
preempt_schedule+0xc0/0xd0 kernel/sched/core.c:6891
preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
__raw_spin_unlock include/linux/spinlock_api_smp.h:143 [inline]
_raw_spin_unlock+0x3a/0x40 kernel/locking/spinlock.c:186
spin_unlock include/linux/spinlock.h:391 [inline]
copy_pte_range mm/memory.c:1107 [inline]
copy_pmd_range mm/memory.c:1168 [inline]
copy_pud_range mm/memory.c:1205 [inline]
copy_p4d_range mm/memory.c:1229 [inline]
copy_page_range+0x2ba0/0x3670 mm/memory.c:1323
dup_mmap kernel/fork.c:764 [inline]
dup_mm kernel/fork.c:1692 [inline]
copy_mm+0x11cb/0x1d50 kernel/fork.c:1741
copy_process+0x16f7/0x3d80 kernel/fork.c:2506
kernel_clone+0x24b/0x8a0 kernel/fork.c:2914
__do_sys_clone kernel/fork.c:3057 [inline]
__se_sys_clone kernel/fork.c:3041 [inline]
__x64_sys_clone+0x1b7/0x230 kernel/fork.c:3041
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f978b79c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f978c747fd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
RAX: ffffffffffffffda RBX: 00007f978ba15fa0 RCX: 00007f978b79c819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 00007f978b832c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f978ba16038 R14: 00007f978ba15fa0 R15: 00007ffe62999cd8
</TASK>
net_ratelimit: 4476 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:e6:af:5e:3e:1f:3e, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:e6:af:5e:3e:1f:3e, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:e6:af:5e:3e:1f:3e, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:e6:af:5e:3e:1f:3e, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:e6:af:5e:3e:1f:3e, vlan:0)
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup