[v6.6] possible deadlock in tty_buffer_flush

syzbot

to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 9760bf04666d Linux 6.6.135
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1474f1ba580000
kernel config: https://syzkaller.appspot.com/x/.config?x=c5b35c4db8465904
dashboard link: https://syzkaller.appspot.com/bug?extid=0aab4971ac5ae130b19c
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/a685913d05a7/disk-9760bf04.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/a3c1d21d4bca/vmlinux-9760bf04.xz
kernel image: https://storage.googleapis.com/syzbot-assets/09178887ef0d/bzImage-9760bf04.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0aab49...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kworker/0:1/9 is trying to acquire lock:
ffff8880186510b8 (&buf->lock){+.+.}-{3:3}, at: tty_buffer_flush+0x79/0x3f0 drivers/tty/tty_buffer.c:229

but task is already holding lock:
ffffffff8d126400 (console_lock){+.+.}-{0:0}, at: vc_SAK+0x28/0x220 drivers/tty/vt/vt_ioctl.c:985

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (console_lock){+.+.}-{0:0}:
console_lock+0x164/0x1b0 kernel/printk/printk.c:2686
con_flush_chars+0x4b/0x280 drivers/tty/vt/vt.c:3315
__receive_buf drivers/tty/n_tty.c:1650 [inline]
n_tty_receive_buf_common+0xc77/0x12d0 drivers/tty/n_tty.c:1745
tiocsti+0x221/0x2a0 drivers/tty/tty_io.c:2291
tty_ioctl+0x62e/0xdd0 drivers/tty/tty_io.c:2693
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:871 [inline]
__se_sys_ioctl+0xfd/0x170 fs/ioctl.c:857
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #1 (&tty->termios_rwsem){++++}-{3:3}:
down_write+0x97/0x200 kernel/locking/rwsem.c:1573
n_tty_flush_buffer+0x30/0x230 drivers/tty/n_tty.c:363
tty_buffer_flush+0x328/0x3f0 drivers/tty/tty_buffer.c:241
tty_ldisc_flush+0x6b/0xc0 drivers/tty/tty_ldisc.c:388
tty_port_close_start+0x2da/0x540 drivers/tty/tty_port.c:660
tty_port_close+0x2a/0x140 drivers/tty/tty_port.c:715
tty_release+0x387/0x1600 drivers/tty/tty_io.c:1752
__fput+0x234/0x970 fs/file_table.c:384
__do_sys_close fs/open.c:1573 [inline]
__se_sys_close+0x15f/0x220 fs/open.c:1558
do_syscall_x64 arch/x86/entry/common.c:46 [inline]
do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&buf->lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2df1/0x7d40 kernel/locking/lockdep.c:5137
lock_acquire+0x19e/0x420 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x136/0xcc0 kernel/locking/mutex.c:747
tty_buffer_flush+0x79/0x3f0 drivers/tty/tty_buffer.c:229
__do_SAK+0x135/0x6a0 drivers/tty/tty_io.c:3014
vc_SAK+0x78/0x220 drivers/tty/vt/vt_ioctl.c:995
process_one_work kernel/workqueue.c:2653 [inline]
process_scheduled_works+0xa5d/0x15d0 kernel/workqueue.c:2730
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2811
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

other info that might help us debug this:

Chain exists of:
&buf->lock --> &tty->termios_rwsem --> console_lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(console_lock);
                               lock(&tty->termios_rwsem);
                               lock(console_lock);
  lock(&buf->lock);

*** DEADLOCK ***
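Not part of the syzbot report: the cycle lockdep describes above can be illustrated with a small Python sketch of its dependency-graph check. Lock names are taken from the report; this models the idea of check_noncircular, not the kernel's actual implementation.

```python
# Toy model of lockdep's cycle check for the chain in this report.
# An edge A -> B means "B was acquired while A was held". The first two
# edges come from the -> #1 and -> #2 stacks; vc_SAK adds the third,
# closing the cycle that lockdep warns about.
from collections import defaultdict

deps = defaultdict(set)

def acquire(held, new):
    """Record that `new` was taken while `held` was already held."""
    deps[held].add(new)

# tty_buffer_flush() holds buf->lock, then n_tty_flush_buffer() takes
# tty->termios_rwsem (stack #1).
acquire("&buf->lock", "&tty->termios_rwsem")
# n_tty_receive_buf_common() holds termios_rwsem, then con_flush_chars()
# takes console_lock (stack #2).
acquire("&tty->termios_rwsem", "console_lock")
# vc_SAK() holds console_lock, then tty_buffer_flush() tries to take
# buf->lock (stack #0) -- the new, cycle-closing edge.
acquire("console_lock", "&buf->lock")

def reaches(start, target, seen=None):
    """Depth-first search: is `target` reachable from `start`?"""
    seen = set() if seen is None else seen
    for nxt in deps[start]:
        if nxt == target:
            return True
        if nxt not in seen:
            seen.add(nxt)
            if reaches(nxt, target, seen):
                return True
    return False

if __name__ == "__main__":
    # A lock that can reach itself means a circular dependency.
    print(reaches("&buf->lock", "&buf->lock"))  # True
```

Dropping any one of the three edges (e.g. by taking buf->lock before console_lock in the SAK path) breaks the cycle, which is the usual shape of a fix for this class of report.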

3 locks held by kworker/0:1/9:
#0: ffff888017c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#0: ffff888017c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#1: ffffc900000e7d00 ((work_completion)(&vc_cons[currcons].SAK_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2628 [inline]
#1: ffffc900000e7d00 ((work_completion)(&vc_cons[currcons].SAK_work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2730
#2: ffffffff8d126400 (console_lock){+.+.}-{0:0}, at: vc_SAK+0x28/0x220 drivers/tty/vt/vt_ioctl.c:985

stack backtrace:
CPU: 0 PID: 9 Comm: kworker/0:1 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/18/2026
Workqueue: events vc_SAK
Call Trace:
<TASK>
dump_stack_lvl+0x18c/0x250 lib/dump_stack.c:106
check_noncircular+0x2fc/0x400 kernel/locking/lockdep.c:2187
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain kernel/locking/lockdep.c:3869 [inline]
__lock_acquire+0x2df1/0x7d40 kernel/locking/lockdep.c:5137
lock_acquire+0x19e/0x420 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x136/0xcc0 kernel/locking/mutex.c:747
tty_buffer_flush+0x79/0x3f0 drivers/tty/tty_buffer.c:229
__do_SAK+0x135/0x6a0 drivers/tty/tty_io.c:3014
vc_SAK+0x78/0x220 drivers/tty/vt/vt_ioctl.c:995
process_one_work kernel/workqueue.c:2653 [inline]
process_scheduled_works+0xa5d/0x15d0 kernel/workqueue.c:2730
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2811
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup