WARNING: locking bug in l2cap_sock_teardown_cb

syzbot

Jan 5, 2021, 6:03:21 AM
to da...@davemloft.net, johan....@gmail.com, ku...@kernel.org, linux-b...@vger.kernel.org, linux-...@vger.kernel.org, luiz....@gmail.com, mar...@holtmann.org, net...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 139711f0 Merge branch 'akpm' (patches from Andrew)
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=17a6d077500000
kernel config: https://syzkaller.appspot.com/x/.config?x=97ec68097e292826
dashboard link: https://syzkaller.appspot.com/bug?extid=9cde9e1af823debba3b2
compiler: gcc (GCC) 10.1.0-syz 20200507

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+9cde9e...@syzkaller.appspotmail.com

------------[ cut here ]------------
DEBUG_LOCKS_WARN_ON(1)
WARNING: CPU: 1 PID: 69 at kernel/locking/lockdep.c:202 hlock_class kernel/locking/lockdep.c:202 [inline]
WARNING: CPU: 1 PID: 69 at kernel/locking/lockdep.c:202 hlock_class kernel/locking/lockdep.c:191 [inline]
WARNING: CPU: 1 PID: 69 at kernel/locking/lockdep.c:202 check_wait_context kernel/locking/lockdep.c:4506 [inline]
WARNING: CPU: 1 PID: 69 at kernel/locking/lockdep.c:202 __lock_acquire+0x165e/0x5500 kernel/locking/lockdep.c:4782
Modules linked in:
CPU: 1 PID: 69 Comm: kworker/1:1 Not tainted 5.11.0-rc1-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
Workqueue: events l2cap_chan_timeout
RIP: 0010:hlock_class kernel/locking/lockdep.c:202 [inline]
RIP: 0010:hlock_class kernel/locking/lockdep.c:191 [inline]
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4506 [inline]
RIP: 0010:__lock_acquire+0x165e/0x5500 kernel/locking/lockdep.c:4782
Code: 08 84 d2 0f 85 1c 2c 00 00 8b 15 95 67 97 0b 85 d2 0f 85 5f fa ff ff 48 c7 c6 a0 a7 4b 89 48 c7 c7 c0 9d 4b 89 e8 89 7f 5b 07 <0f> 0b e9 45 fa ff ff c7 44 24 60 fe ff ff ff 41 bf 01 00 00 00 c7
RSP: 0018:ffffc900007b7900 EFLAGS: 00010082
RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000
RDX: ffff88801115d280 RSI: ffffffff815b2ae5 RDI: fffff520000f6f12
RBP: ffff88801115d280 R08: 0000000000000000 R09: 0000000000000000
R10: ffffffff815abc8e R11: 0000000000000000 R12: ffff88801115dca8
R13: 00000000000013db R14: ffff888020f740a0 R15: 0000000000040000
FS: 0000000000000000(0000) GS:ffff88802cb00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffd559fd13c CR3: 000000005a46e000 CR4: 0000000000350ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
lock_acquire kernel/locking/lockdep.c:5437 [inline]
lock_acquire+0x29d/0x740 kernel/locking/lockdep.c:5402
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x2f/0x40 kernel/locking/spinlock.c:175
spin_lock_bh include/linux/spinlock.h:359 [inline]
lock_sock_nested+0x3b/0x110 net/core/sock.c:3049
l2cap_sock_teardown_cb+0xa1/0x660 net/bluetooth/l2cap_sock.c:1520
l2cap_chan_del+0xbc/0xa80 net/bluetooth/l2cap_core.c:618
l2cap_chan_close+0x1bc/0xaf0 net/bluetooth/l2cap_core.c:823
l2cap_chan_timeout+0x17e/0x2f0 net/bluetooth/l2cap_core.c:436
process_one_work+0x98d/0x15f0 kernel/workqueue.c:2275
worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
kthread+0x3b1/0x4a0 kernel/kthread.c:292
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Feb 21, 2022, 3:37:13 PM
to syzkall...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, there is no reproducer, and there has been no activity.