[v5.15] INFO: task hung in __sync_dirty_buffer


syzbot

Apr 8, 2023, 2:57:32 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d86dfc4d95cd Linux 5.15.106
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1716ab1bc80000
kernel config: https://syzkaller.appspot.com/x/.config?x=dca379fe384dda80
dashboard link: https://syzkaller.appspot.com/bug?extid=fe0a0d156f3505e58f99
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2c159eb4fcae/disk-d86dfc4d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5f50187f87c7/vmlinux-d86dfc4d.xz
kernel image: https://storage.googleapis.com/syzbot-assets/f787f3f09c09/bzImage-d86dfc4d.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+fe0a0d...@syzkaller.appspotmail.com

INFO: task syz-executor.4:531 blocked for more than 143 seconds.
Not tainted 5.15.106-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:23040 pid: 531 ppid: 3651 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
schedule+0x11b/0x1f0 kernel/sched/core.c:6455
io_schedule+0x88/0x100 kernel/sched/core.c:8472
bit_wait_io+0xe/0xc0 kernel/sched/wait_bit.c:209
__wait_on_bit_lock+0xbf/0x1a0 kernel/sched/wait_bit.c:90
out_of_line_wait_on_bit_lock+0x1d0/0x250 kernel/sched/wait_bit.c:117
wait_on_bit_lock_io include/linux/wait_bit.h:208 [inline]
__lock_buffer fs/buffer.c:69 [inline]
lock_buffer include/linux/buffer_head.h:402 [inline]
__sync_dirty_buffer+0x120/0x380 fs/buffer.c:3144
__ext4_handle_dirty_metadata+0x288/0x800 fs/ext4/ext4_jbd2.c:381
ext4_handle_dirty_dirblock+0x35e/0x6f0 fs/ext4/namei.c:438
ext4_finish_convert_inline_dir+0x57b/0x6f0 fs/ext4/inline.c:1188
ext4_convert_inline_data_nolock+0xa15/0xda0 fs/ext4/inline.c:1265
ext4_try_add_inline_entry+0x805/0xb60 fs/ext4/inline.c:1338
ext4_add_entry+0x6c2/0x12b0 fs/ext4/namei.c:2384
ext4_add_nondir+0x98/0x290 fs/ext4/namei.c:2771
ext4_create+0x372/0x550 fs/ext4/namei.c:2816
lookup_open fs/namei.c:3392 [inline]
open_last_lookups fs/namei.c:3462 [inline]
path_openat+0x12f6/0x2f20 fs/namei.c:3669
do_filp_open+0x21c/0x460 fs/namei.c:3699
do_sys_openat2+0x13b/0x500 fs/open.c:1211
do_sys_open fs/open.c:1227 [inline]
__do_sys_openat fs/open.c:1243 [inline]
__se_sys_openat fs/open.c:1238 [inline]
__x64_sys_openat+0x243/0x290 fs/open.c:1238
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f00f9321169
RSP: 002b:00007f00f7893168 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f00f9440f80 RCX: 00007f00f9321169
RDX: 000000000000275a RSI: 0000000020000000 RDI: ffffffffffffff9c
RBP: 00007f00f937cca1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffda92a5e1f R14: 00007f00f7893300 R15: 0000000000022000
</TASK>
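
The blocked task is sleeping uninterruptibly on a buffer lock: __sync_dirty_buffer() takes lock_buffer() before submitting the write, and on contention lock_buffer() falls back to a bit-wait that shows up in the trace as bit_wait_io()/io_schedule(). A minimal sketch of that path, paraphrased from include/linux/buffer_head.h and fs/buffer.c of this era (not quoted verbatim):

    /* Fast path: try to grab BH_Lock; otherwise sleep for it. */
    static inline void lock_buffer(struct buffer_head *bh)
    {
        might_sleep();
        if (!trylock_buffer(bh))
            __lock_buffer(bh);
    }

    /* Slow path: uninterruptible I/O wait on the BH_Lock bit, i.e. the
     * bit_wait_io()/io_schedule() frames in the trace above. */
    void __lock_buffer(struct buffer_head *bh)
    {
        wait_on_bit_lock_io(&bh->b_state, BH_Lock, TASK_UNINTERRUPTIBLE);
    }

Because the wait is TASK_UNINTERRUPTIBLE, the task sits in D state until whoever owns the buffer lock releases it, which is why khungtaskd flags it once hung_task_timeout_secs elapses.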

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8c91b920 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
1 lock held by klogd/2948:
#0: ffff8880b9b39698 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
2 locks held by getty/3275:
#0: ffff88802468d098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc90002bb32e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1da0 drivers/tty/n_tty.c:2147
3 locks held by kworker/u4:25/9439:
#0: ffff888142fd2938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2279
#1: ffffc9000914fd20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2281
#2: ffff888011e620e0 (&type->s_umount_key#41){.+.+}-{3:3}, at: trylock_super+0x1b/0xf0 fs/super.c:418
3 locks held by syz-executor.4/531:
#0: ffff888080514460 (sb_writers#5){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:377
#1: ffff88801836f198 (&type->i_mutex_dir_key#4){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#1: ffff88801836f198 (&type->i_mutex_dir_key#4){++++}-{3:3}, at: open_last_lookups fs/namei.c:3459 [inline]
#1: ffff88801836f198 (&type->i_mutex_dir_key#4){++++}-{3:3}, at: path_openat+0x824/0x2f20 fs/namei.c:3669
#2: ffff88801836ee70 (&ei->xattr_sem){++++}-{3:3}, at: ext4_write_lock_xattr fs/ext4/xattr.h:155 [inline]
#2: ffff88801836ee70 (&ei->xattr_sem){++++}-{3:3}, at: ext4_try_add_inline_entry+0xf2/0xb60 fs/ext4/inline.c:1296
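
Note that while it waits, syz-executor.4 already holds sb_writers#5, the directory's i_mutex_dir_key, and the inode's xattr_sem, so any other task that needs those locks (other lookups or creates in the same directory, xattr operations on the same inode) will queue up behind the stalled buffer lock. The xattr_sem entry comes from ext4_write_lock_xattr(), which is essentially a write-side down of &ei->xattr_sem; a rough sketch, paraphrased from fs/ext4/xattr.h (not verbatim):

    static inline void ext4_write_lock_xattr(struct inode *inode, int *save)
    {
        down_write(&EXT4_I(inode)->xattr_sem);
        *save = ext4_test_inode_state(inode, EXT4_STATE_NO_EXPAND);
        ext4_clear_inode_state(inode, EXT4_STATE_NO_EXPAND);
    }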

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted 5.15.106-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
watchdog+0xe72/0xeb0 kernel/hung_task.c:295
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 29109 Comm: kworker/u4:3 Not tainted 5.15.106-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Workqueue: phy13 ieee80211_iface_work
RIP: 0010:memory_is_nonzero mm/kasan/generic.c:101 [inline]
RIP: 0010:memory_is_poisoned_n mm/kasan/generic.c:128 [inline]
RIP: 0010:memory_is_poisoned mm/kasan/generic.c:159 [inline]
RIP: 0010:check_region_inline mm/kasan/generic.c:180 [inline]
RIP: 0010:kasan_check_range+0x62/0x290 mm/kasan/generic.c:189
Code: b8 00 00 00 00 00 fc ff df 4e 8d 0c 03 4c 8d 54 37 ff 49 c1 ea 03 49 bb 01 00 00 00 00 fc ff df 4f 8d 34 1a 4c 89 f5 4c 29 cd <48> 83 fd 10 7f 26 48 85 ed 0f 84 3a 01 00 00 49 f7 d2 49 01 da 41
RSP: 0018:ffffc900043b7650 EFLAGS: 00000002
RAX: 0000000000000001 RBX: 1ffffffff1f76e33 RCX: ffffffff8162a4d8
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8fbb7198
RBP: 0000000000000001 R08: dffffc0000000000 R09: fffffbfff1f76e33
R10: 1ffffffff1f76e33 R11: dffffc0000000001 R12: ffff88801c08c468
R13: dffffc0000000000 R14: fffffbfff1f76e34 R15: ffff88801c08c448
FS: 0000000000000000(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c00b157708 CR3: 0000000023ae2000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
instrument_atomic_read include/linux/instrumented.h:71 [inline]
test_bit include/asm-generic/bitops/instrumented-non-atomic.h:134 [inline]
hlock_class kernel/locking/lockdep.c:197 [inline]
mark_lock+0x98/0x340 kernel/locking/lockdep.c:4568
mark_held_locks kernel/locking/lockdep.c:4192 [inline]
__trace_hardirqs_on_caller kernel/locking/lockdep.c:4210 [inline]
lockdep_hardirqs_on_prepare+0x27d/0x7a0 kernel/locking/lockdep.c:4277
trace_hardirqs_on+0x67/0x80 kernel/trace/trace_preemptirq.c:49
kasan_quarantine_put+0xd4/0x220 mm/kasan/quarantine.c:231
kasan_slab_free include/linux/kasan.h:230 [inline]
slab_free_hook mm/slub.c:1705 [inline]
slab_free_freelist_hook+0xdd/0x160 mm/slub.c:1731
slab_free mm/slub.c:3499 [inline]
kfree+0xf1/0x270 mm/slub.c:4559
ieee80211_bss_info_update+0x9bf/0xc80 net/mac80211/scan.c:232
ieee80211_rx_bss_info net/mac80211/ibss.c:1123 [inline]
ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1614 [inline]
ieee80211_ibss_rx_queued_mgmt+0x175e/0x2af0 net/mac80211/ibss.c:1643
ieee80211_iface_process_skb net/mac80211/iface.c:1441 [inline]
ieee80211_iface_work+0x78f/0xcc0 net/mac80211/iface.c:1495
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2306
worker_thread+0xaca/0x1280 kernel/workqueue.c:2453
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Apr 16, 2023, 4:27:43 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 0102425ac76b Linux 6.1.24
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13489f8fc80000
kernel config: https://syzkaller.appspot.com/x/.config?x=cb976cea3176ce66
dashboard link: https://syzkaller.appspot.com/bug?extid=8051920e29bc43760bd7
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2bbac2ad18f4/disk-0102425a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/9000ac7a5566/vmlinux-0102425a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/08367306fcea/bzImage-0102425a.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+805192...@syzkaller.appspotmail.com

INFO: task syz-executor.0:4923 blocked for more than 143 seconds.
Not tainted 6.1.24-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.0 state:D stack:24440 pid:4923 ppid:3666 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5241 [inline]
__schedule+0x132c/0x4330 kernel/sched/core.c:6554
schedule+0xbf/0x180 kernel/sched/core.c:6630
io_schedule+0x88/0x100 kernel/sched/core.c:8774
bit_wait_io+0xe/0xc0 kernel/sched/wait_bit.c:209
__wait_on_bit_lock+0xb9/0x190 kernel/sched/wait_bit.c:90
out_of_line_wait_on_bit_lock+0x1d0/0x250 kernel/sched/wait_bit.c:117
wait_on_bit_lock_io include/linux/wait_bit.h:208 [inline]
__lock_buffer fs/buffer.c:69 [inline]
lock_buffer include/linux/buffer_head.h:397 [inline]
__sync_dirty_buffer+0x11c/0x380 fs/buffer.c:2732
__ext4_handle_dirty_metadata+0x2a2/0x810 fs/ext4/ext4_jbd2.c:381
ext4_convert_inline_data_nolock+0xad1/0xda0 fs/ext4/inline.c:1255
ext4_convert_inline_data+0x4cf/0x610 fs/ext4/inline.c:2066
ext4_page_mkwrite+0x1e4/0x10d0 fs/ext4/inode.c:6165
do_page_mkwrite+0x1a1/0x5f0 mm/memory.c:2973
wp_page_shared+0x164/0x380 mm/memory.c:3319
handle_pte_fault mm/memory.c:4982 [inline]
__handle_mm_fault mm/memory.c:5106 [inline]
handle_mm_fault+0x251b/0x5140 mm/memory.c:5227
do_user_addr_fault arch/x86/mm/fault.c:1428 [inline]
handle_page_fault arch/x86/mm/fault.c:1519 [inline]
exc_page_fault+0x58d/0x790 arch/x86/mm/fault.c:1575
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0033:0x7f0a17e2bde9
RSP: 002b:00007ffddfbe46e0 EFLAGS: 00010246
RAX: 00000000200001c0 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000004 RSI: 0000000000000000 RDI: 0000555555e8c2e8
RBP: 00007ffddfbe47d8 R08: 0000000000000000 R09: 0000000000000000
R10: 00007f0a17a02a28 R11: 0000000000000246 R12: 0000000000059144
R13: 00007ffddfbe4800 R14: 00007f0a17fac050 R15: 0000000000000032
</TASK>
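
This 6.1 instance reaches the same buffer-lock wait through the page-fault path (ext4_page_mkwrite -> ext4_convert_inline_data) rather than openat/create, but the tail is identical: __ext4_handle_dirty_metadata() ends up in __sync_dirty_buffer(), which it only does on its non-jbd2 fallback, when the handle is not a valid journal handle and the inode needs synchronous writes. An abridged sketch of that fallback, paraphrased from fs/ext4/ext4_jbd2.c of this era with error handling elided (not verbatim):

    int __ext4_handle_dirty_metadata(const char *where, unsigned int line,
                                     handle_t *handle, struct inode *inode,
                                     struct buffer_head *bh)
    {
        int err = 0;

        might_sleep();
        set_buffer_meta(bh);
        set_buffer_prio(bh);
        if (ext4_handle_valid(handle)) {
            /* Journaled case: hand the buffer to jbd2. */
            err = jbd2_journal_dirty_metadata(handle, bh);
            /* error handling elided */
        } else {
            /* No-journal fallback: dirty the buffer directly. */
            if (inode)
                mark_buffer_dirty_inode(bh, inode);
            else
                mark_buffer_dirty(bh);
            /* For sync inodes, write it out immediately:
             * sync_dirty_buffer() -> __sync_dirty_buffer() -> lock_buffer(),
             * which is where both reports block. */
            if (inode && inode_needs_sync(inode)) {
                sync_dirty_buffer(bh);
                /* I/O error check elided */
            }
        }
        return err;
    }

If that reading is right, hitting this path needs an ext4 image without a journal and an inode marked for synchronous writes, which would be consistent with the fuzzer mounting crafted images; there is still no reproducer to confirm it.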

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8cf26870 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8cf27070 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
#0: ffffffff8cf266a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
3 locks held by kworker/1:2/154:
#0: ffff888012464d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc90002dcfd20 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
#2: ffffffff8e08a0c8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xa/0x50 net/core/link_watch.c:263
2 locks held by getty/3309:
#0: ffff888028de2098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2177
1 lock held by syz-executor.2/3674:
#0: ffffffff8cf2bc78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#0: ffffffff8cf2bc78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x479/0x8a0 kernel/rcu/tree_exp.h:948
2 locks held by kworker/u4:2/3701:
#0: ffff888012469138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc900043e7d20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
4 locks held by syz-executor.0/4923:
#0: ffff888017c744d8 (&mm->mmap_lock#2){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
#0: ffff888017c744d8 (&mm->mmap_lock#2){++++}-{3:3}, at: do_user_addr_fault arch/x86/mm/fault.c:1369 [inline]
#0: ffff888017c744d8 (&mm->mmap_lock#2){++++}-{3:3}, at: handle_page_fault arch/x86/mm/fault.c:1519 [inline]
#0: ffff888017c744d8 (&mm->mmap_lock#2){++++}-{3:3}, at: exc_page_fault+0x182/0x790 arch/x86/mm/fault.c:1575
#1: ffff888082918558 (sb_pagefaults){.+.+}-{0:0}, at: __sb_start_write include/linux/fs.h:1832 [inline]
#1: ffff888082918558 (sb_pagefaults){.+.+}-{0:0}, at: sb_start_pagefault include/linux/fs.h:1936 [inline]
#1: ffff888082918558 (sb_pagefaults){.+.+}-{0:0}, at: ext4_page_mkwrite+0x1ad/0x10d0 fs/ext4/inode.c:6160
#2: ffff88808547cbd8 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:811 [inline]
#2: ffff88808547cbd8 (mapping.invalidate_lock){++++}-{3:3}, at: ext4_page_mkwrite+0x1d7/0x10d0 fs/ext4/inode.c:6163
#3: ffff88808547c700 (&ei->xattr_sem){++++}-{3:3}, at: ext4_write_lock_xattr fs/ext4/xattr.h:155 [inline]
#3: ffff88808547c700 (&ei->xattr_sem){++++}-{3:3}, at: ext4_convert_inline_data+0x3ab/0x610 fs/ext4/inline.c:2064
4 locks held by kworker/u4:17/6835:
#0: ffff888012606938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc90005e07d20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
#2: ffffffff8e07dd10 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:563
#3: ffffffff8e08a0c8 (rtnl_mutex){+.+.}-{3:3}, at: xfrmi_exit_batch_net+0xb0/0x350 net/xfrm/xfrm_interface.c:958
2 locks held by kworker/1:19/6985:
#0: ffff888012466538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
#1: ffffc900094a7d20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
2 locks held by syz-executor.0/7620:
#0: ffffffff8e08a0c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e08a0c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6088
#1: ffffffff8cf2bc78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#1: ffffffff8cf2bc78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3b0/0x8a0 kernel/rcu/tree_exp.h:948
1 lock held by syz-executor.0/7626:
#0: ffffffff8e08a0c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e08a0c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6088
1 lock held by syz-executor.0/7627:
#0: ffffffff8e08a0c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e08a0c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6088

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.1.24-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xf18/0xf60 kernel/hung_task.c:377
kthread+0x268/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 3897 Comm: kworker/u4:8 Not tainted 6.1.24-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Workqueue: bat_events batadv_purge_orig
RIP: 0010:lockdep_hardirqs_off+0x3/0x100 kernel/locking/lockdep.c:4395
Code: 03 00 0f 85 77 ff ff ff 48 c7 c7 c0 c4 eb 8a 48 c7 c6 c0 c5 eb 8a e8 1c 24 d4 f6 0f 0b e9 5d ff ff ff e8 a0 fe ff ff 41 56 53 <48> 83 ec 10 65 48 8b 04 25 28 00 00 00 48 89 44 24 08 83 3d f8 ad
RSP: 0018:ffffc9000522f9f8 EFLAGS: 00000006
RAX: 0000000000000246 RBX: ffffffff81539ff2 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000201 RDI: ffffffff81539ff2
RBP: ffffc9000522fad0 R08: dffffc0000000000 R09: ffffed1010142899
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 1ffff92000a45f4c R14: ffffc9000522fa60 R15: 0000000000000201
FS: 0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000056078e794ad8 CR3: 000000000cc8e000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
trace_hardirqs_off+0xe/0x40 kernel/trace/trace_preemptirq.c:76
__local_bh_enable_ip+0x102/0x1f0 kernel/softirq.c:378
spin_unlock_bh include/linux/spinlock.h:395 [inline]
batadv_purge_orig_ref+0x14b9/0x15a0 net/batman-adv/originator.c:1259
batadv_purge_orig+0x15/0x60 net/batman-adv/originator.c:1272
process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
kthread+0x268/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>

syzbot

Aug 22, 2023, 11:19:49 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while, there is no reproducer, and there has been no recent activity.

syzbot

Aug 23, 2023, 5:07:35 AM
to syzkaller...@googlegroups.com