possible deadlock in do_journal_begin_r


syzbot

Sep 27, 2022, 6:45:48 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 3f8a27f9e27b Linux 4.19.211
git tree: linux-4.19.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17d0dbc4880000
kernel config: https://syzkaller.appspot.com/x/.config?x=9b9277b418617afe
dashboard link: https://syzkaller.appspot.com/bug?extid=62c10f0bd6c14e9fffec
compiler: gcc version 10.2.1 20210110 (Debian 10.2.1-6)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=13df9a4c880000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=15cfc588880000

Downloadable assets:
disk image: https://storage.googleapis.com/98c0bdb4abb3/disk-3f8a27f9.raw.xz
vmlinux: https://storage.googleapis.com/ea228ff02669/vmlinux-3f8a27f9.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+62c10f...@syzkaller.appspotmail.com

reiserfs: enabling write barrier flush mode
REISERFS warning (device loop0): jdm-20006 create_privroot: xattrs/ACLs enabled and couldn't find/create .reiserfs_priv. Failing mount.
======================================================
WARNING: possible circular locking dependency detected
4.19.211-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor266/8102 is trying to acquire lock:
00000000c05e6d99 (&journal->j_mutex){+.+.}, at: reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:816 [inline]
00000000c05e6d99 (&journal->j_mutex){+.+.}, at: lock_journal fs/reiserfs/journal.c:538 [inline]
00000000c05e6d99 (&journal->j_mutex){+.+.}, at: do_journal_begin_r+0x298/0x10b0 fs/reiserfs/journal.c:3057

but task is already holding lock:
0000000034ca88b5 (sb_writers#11){.+.+}, at: sb_start_write include/linux/fs.h:1579 [inline]
0000000034ca88b5 (sb_writers#11){.+.+}, at: mnt_want_write_file+0x63/0x1d0 fs/namespace.c:418

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (sb_writers#11){.+.+}:
sb_start_write include/linux/fs.h:1579 [inline]
mnt_want_write_file+0x63/0x1d0 fs/namespace.c:418
reiserfs_ioctl+0x1a7/0x9a0 fs/reiserfs/ioctl.c:110
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:501 [inline]
do_vfs_ioctl+0xcdb/0x12e0 fs/ioctl.c:688
ksys_ioctl+0x9b/0xc0 fs/ioctl.c:705
__do_sys_ioctl fs/ioctl.c:712 [inline]
__se_sys_ioctl fs/ioctl.c:710 [inline]
__x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:710
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (&sbi->lock){+.+.}:
reiserfs_write_lock_nested+0x65/0xe0 fs/reiserfs/lock.c:78
reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:817 [inline]
lock_journal fs/reiserfs/journal.c:538 [inline]
do_journal_begin_r+0x2a2/0x10b0 fs/reiserfs/journal.c:3057
journal_begin+0x162/0x400 fs/reiserfs/journal.c:3265
reiserfs_remount+0x790/0x1540 fs/reiserfs/super.c:1572
do_remount_sb+0x1a0/0x6a0 fs/super.c:888
do_remount fs/namespace.c:2313 [inline]
do_mount+0x1a62/0x2f50 fs/namespace.c:2813
ksys_mount+0xcf/0x130 fs/namespace.c:3038
__do_sys_mount fs/namespace.c:3052 [inline]
__se_sys_mount fs/namespace.c:3049 [inline]
__x64_sys_mount+0xba/0x150 fs/namespace.c:3049
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&journal->j_mutex){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:937 [inline]
__mutex_lock+0xd7/0x1190 kernel/locking/mutex.c:1078
reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:816 [inline]
lock_journal fs/reiserfs/journal.c:538 [inline]
do_journal_begin_r+0x298/0x10b0 fs/reiserfs/journal.c:3057
journal_begin+0x162/0x400 fs/reiserfs/journal.c:3265
reiserfs_dirty_inode+0xff/0x250 fs/reiserfs/super.c:716
__mark_inode_dirty+0x16b/0x1140 fs/fs-writeback.c:2164
mark_inode_dirty include/linux/fs.h:2086 [inline]
reiserfs_ioctl+0x7dc/0x9a0 fs/reiserfs/ioctl.c:118
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:501 [inline]
do_vfs_ioctl+0xcdb/0x12e0 fs/ioctl.c:688
ksys_ioctl+0x9b/0xc0 fs/ioctl.c:705
__do_sys_ioctl fs/ioctl.c:712 [inline]
__se_sys_ioctl fs/ioctl.c:710 [inline]
__x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:710
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
&journal->j_mutex --> &sbi->lock --> sb_writers#11

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(sb_writers#11);
                               lock(&sbi->lock);
                               lock(sb_writers#11);
  lock(&journal->j_mutex);

*** DEADLOCK ***

1 lock held by syz-executor266/8102:
#0: 0000000034ca88b5 (sb_writers#11){.+.+}, at: sb_start_write include/linux/fs.h:1579 [inline]
#0: 0000000034ca88b5 (sb_writers#11){.+.+}, at: mnt_want_write_file+0x63/0x1d0 fs/namespace.c:418

stack backtrace:
CPU: 0 PID: 8102 Comm: syz-executor266 Not tainted 4.19.211-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1fc/0x2ef lib/dump_stack.c:118
print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1222
check_prev_add kernel/locking/lockdep.c:1866 [inline]
check_prevs_add kernel/locking/lockdep.c:1979 [inline]
validate_chain kernel/locking/lockdep.c:2420 [inline]
__lock_acquire+0x30c9/0x3ff0 kernel/locking/lockdep.c:3416
lock_acquire+0x170/0x3c0 kernel/locking/lockdep.c:3908
__mutex_lock_common kernel/locking/mutex.c:937 [inline]
__mutex_lock+0xd7/0x1190 kernel/locking/mutex.c:1078
reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:816 [inline]
lock_journal fs/reiserfs/journal.c:538 [inline]
do_journal_begin_r+0x298/0x10b0 fs/reiserfs/journal.c:3057
journal_begin+0x162/0x400 fs/reiserfs/journal.c:3265
reiserfs_dirty_inode+0xff/0x250 fs/reiserfs/super.c:716
__mark_inode_dirty+0x16b/0x1140 fs/fs-writeback.c:2164
mark_inode_dirty include/linux/fs.h:2086 [inline]
reiserfs_ioctl+0x7dc/0x9a0 fs/reiserfs/ioctl.c:118
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:501 [inline]
do_vfs_ioctl+0xcdb/0x12e0 fs/ioctl.c:688
ksys_ioctl+0x9b/0xc0 fs/ioctl.c:705
__do_sys_ioctl fs/ioctl.c:712 [inline]
__se_sys_ioctl fs/ioctl.c:710 [inline]
__x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:710
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7fe7d32368c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fffd177d4a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fe7d32368c9
RDX: 0000000020000080 RSI: 0000000040087602 RDI: 0000000000000005
RBP: 0000000000000000 R08: 00007fe7d32a3ec0 R09: 00007fe7d32a3ec0
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fffd177d4d0
R13: 0000000000000000 R14: 431bde82d7b634db R15: 0000000000000000
REISERFS (device loop0): Created .reiserfs_priv - reserved for xattr storage.


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this issue, for details see:
https://goo.gl/tpsmEJ#testing-patches

syzbot

Oct 5, 2022, 10:07:42 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 9d5c0b3a8e1a Linux 4.14.295
git tree: linux-4.14.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1025c21c880000
kernel config: https://syzkaller.appspot.com/x/.config?x=746c079015a92425
dashboard link: https://syzkaller.appspot.com/bug?extid=7cf0f1c43fd83b12a4b4
compiler: gcc version 10.2.1 20210110 (Debian 10.2.1-6)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1534f368880000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=17157034880000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/ed6fcf5895a2/disk-9d5c0b3a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/341aa3534116/vmlinux-9d5c0b3a.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/710931f58470/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+7cf0f1...@syzkaller.appspotmail.com

REISERFS (device loop0): journal params: device loop0, size 512, journal first block 18, max trans len 256, max batch 225, max commit age 30, max trans age 30
REISERFS (device loop0): checking transaction log (loop0)
REISERFS (device loop0): Using rupasov hash to sort names
REISERFS (device loop0): Created .reiserfs_priv - reserved for xattr storage.
======================================================
WARNING: possible circular locking dependency detected
4.14.295-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor395/7990 is trying to acquire lock:
(&journal->j_mutex){+.+.}, at: [<ffffffff81b3272b>] reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:816 [inline]
(&journal->j_mutex){+.+.}, at: [<ffffffff81b3272b>] lock_journal fs/reiserfs/journal.c:537 [inline]
(&journal->j_mutex){+.+.}, at: [<ffffffff81b3272b>] do_journal_begin_r+0x26b/0xde0 fs/reiserfs/journal.c:3054

but task is already holding lock:
(sb_writers#10){.+.+}, at: [<ffffffff818e09ad>] sb_start_write include/linux/fs.h:1551 [inline]
(sb_writers#10){.+.+}, at: [<ffffffff818e09ad>] mnt_want_write_file+0xfd/0x3b0 fs/namespace.c:497

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (sb_writers#10){.+.+}:
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
__sb_start_write+0x64/0x260 fs/super.c:1342
sb_start_write include/linux/fs.h:1551 [inline]
mnt_want_write_file+0xfd/0x3b0 fs/namespace.c:497
reiserfs_ioctl+0x18e/0x8b0 fs/reiserfs/ioctl.c:110
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:500 [inline]
do_vfs_ioctl+0x75a/0xff0 fs/ioctl.c:684
SYSC_ioctl fs/ioctl.c:701 [inline]
SyS_ioctl+0x7f/0xb0 fs/ioctl.c:692
do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #1 (&sbi->lock){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
reiserfs_write_lock_nested+0x59/0xd0 fs/reiserfs/lock.c:78
reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:817 [inline]
lock_journal fs/reiserfs/journal.c:537 [inline]
do_journal_begin_r+0x276/0xde0 fs/reiserfs/journal.c:3054
journal_begin+0x162/0x3d0 fs/reiserfs/journal.c:3262
reiserfs_fill_super+0x18f4/0x2990 fs/reiserfs/super.c:2117
mount_bdev+0x2b3/0x360 fs/super.c:1134
mount_fs+0x92/0x2a0 fs/super.c:1237
vfs_kern_mount.part.0+0x5b/0x470 fs/namespace.c:1046
vfs_kern_mount fs/namespace.c:1036 [inline]
do_new_mount fs/namespace.c:2572 [inline]
do_mount+0xe65/0x2a30 fs/namespace.c:2905
SYSC_mount fs/namespace.c:3121 [inline]
SyS_mount+0xa8/0x120 fs/namespace.c:3098
do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #0 (&journal->j_mutex){+.+.}:
lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:816 [inline]
lock_journal fs/reiserfs/journal.c:537 [inline]
do_journal_begin_r+0x26b/0xde0 fs/reiserfs/journal.c:3054
journal_begin+0x162/0x3d0 fs/reiserfs/journal.c:3262
reiserfs_dirty_inode+0xd9/0x200 fs/reiserfs/super.c:716
__mark_inode_dirty+0x11e/0xf40 fs/fs-writeback.c:2134
mark_inode_dirty include/linux/fs.h:2026 [inline]
reiserfs_ioctl+0x6f6/0x8b0 fs/reiserfs/ioctl.c:118
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:500 [inline]
do_vfs_ioctl+0x75a/0xff0 fs/ioctl.c:684
SYSC_ioctl fs/ioctl.c:701 [inline]
SyS_ioctl+0x7f/0xb0 fs/ioctl.c:692
do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x46/0xbb

other info that might help us debug this:

Chain exists of:
&journal->j_mutex --> &sbi->lock --> sb_writers#10

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(sb_writers#10);
                               lock(&sbi->lock);
                               lock(sb_writers#10);
  lock(&journal->j_mutex);

*** DEADLOCK ***

1 lock held by syz-executor395/7990:
#0: (sb_writers#10){.+.+}, at: [<ffffffff818e09ad>] sb_start_write include/linux/fs.h:1551 [inline]
#0: (sb_writers#10){.+.+}, at: [<ffffffff818e09ad>] mnt_want_write_file+0xfd/0x3b0 fs/namespace.c:497

stack backtrace:
CPU: 0 PID: 7990 Comm: syz-executor395 Not tainted 4.14.295-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/22/2022
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x1b2/0x281 lib/dump_stack.c:58
print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1258
check_prev_add kernel/locking/lockdep.c:1905 [inline]
check_prevs_add kernel/locking/lockdep.c:2022 [inline]
validate_chain kernel/locking/lockdep.c:2464 [inline]
__lock_acquire+0x2e0e/0x3f20 kernel/locking/lockdep.c:3491
lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
reiserfs_mutex_lock_safe fs/reiserfs/reiserfs.h:816 [inline]
lock_journal fs/reiserfs/journal.c:537 [inline]
do_journal_begin_r+0x26b/0xde0 fs/reiserfs/journal.c:3054
journal_begin+0x162/0x3d0 fs/reiserfs/journal.c:3262
reiserfs_dirty_inode+0xd9/0x200 fs/reiserfs/super.c:716
__mark_inode_dirty+0x11e/0xf40 fs/fs-writeback.c:2134
mark_inode_dirty include/linux/fs.h:2026 [inline]
reiserfs_ioctl+0x6f6/0x8b0 fs/reiserfs/ioctl.c:118
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:500 [inline]
do_vfs_ioctl+0x75a/0xff0 fs/ioctl.c:684
SYSC_ioctl fs/ioctl.c:701 [inline]
SyS_ioctl+0x7f/0xb0 fs/ioctl.c:692
do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x46/0xbb
RIP: 0033:0x7f541cc14899
RSP: 002b:00007ffd292efcd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f541cc14899
RDX: 0000000020000080 RSI: 0000000040087602 RDI: 0000000000000005
RBP: 0000000000000000 R08: 0000000000000000 R09: 00007f541cc82ec0
R10: 00007ffd292efba0 R11: 0000000000000246 R12: 00007ffd292efd00
R13: 0000000000000000 R14: 431bde82d7b634db R15: 0000000000000000

