possible deadlock in genl_rcv


syzbot

Mar 4, 2020, 6:07:15 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: a083db76 Linux 4.19.107
git tree: linux-4.19.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13e5e309e00000
kernel config: https://syzkaller.appspot.com/x/.config?x=c32f76aaadd644de
dashboard link: https://syzkaller.appspot.com/bug?extid=e2499847dde1b2f1ec33
compiler: gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+e24998...@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
4.19.107-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.3/22347 is trying to acquire lock:
00000000c9afe8a4 (cb_lock){++++}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:637

but task is already holding lock:
00000000ce8784fe (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62 [inline]
00000000ce8784fe (&pipe->mutex/1){+.+.}, at: pipe_lock+0x63/0x80 fs/pipe.c:70

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (&pipe->mutex/1){+.+.}:
pipe_lock_nested fs/pipe.c:62 [inline]
pipe_lock+0x63/0x80 fs/pipe.c:70
iter_file_splice_write+0x183/0xb30 fs/splice.c:700
do_splice_from fs/splice.c:852 [inline]
do_splice+0x5ea/0x1250 fs/splice.c:1154
__do_sys_splice fs/splice.c:1428 [inline]
__se_sys_splice fs/splice.c:1408 [inline]
__x64_sys_splice+0x2b5/0x320 fs/splice.c:1408
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #3 (sb_writers#4){.+.+}:
sb_start_write include/linux/fs.h:1578 [inline]
mnt_want_write+0x3a/0xb0 fs/namespace.c:360
ovl_create_object+0x96/0x290 fs/overlayfs/dir.c:600
lookup_open+0x11f6/0x19b0 fs/namei.c:3235
do_last fs/namei.c:3327 [inline]
path_openat+0x13cb/0x4200 fs/namei.c:3537
do_filp_open+0x1a1/0x280 fs/namei.c:3567
do_sys_open+0x3c0/0x500 fs/open.c:1088
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #2 (&ovl_i_mutex_dir_key[depth]){++++}:
inode_lock_shared include/linux/fs.h:757 [inline]
lookup_slow+0x43/0x70 fs/namei.c:1688
walk_component+0x70a/0x1ee0 fs/namei.c:1811
link_path_walk.part.0+0x8fd/0x1210 fs/namei.c:2142
link_path_walk fs/namei.c:2073 [inline]
path_openat+0x1ed/0x4200 fs/namei.c:3536
do_filp_open+0x1a1/0x280 fs/namei.c:3567
file_open_name+0x291/0x370 fs/open.c:1035
filp_open+0x47/0x70 fs/open.c:1055
kernel_read_file_from_path+0x78/0xf0 fs/exec.c:971
fw_get_filesystem_firmware drivers/base/firmware_loader/main.c:328 [inline]
_request_firmware+0x6f3/0x10f0 drivers/base/firmware_loader/main.c:587
request_firmware+0x33/0x50 drivers/base/firmware_loader/main.c:636
reg_reload_regdb+0x7a/0x240 net/wireless/reg.c:1073
genl_family_rcv_msg+0x627/0xc10 net/netlink/genetlink.c:602
genl_rcv_msg+0xbf/0x160 net/netlink/genetlink.c:627
netlink_rcv_skb+0x160/0x410 net/netlink/af_netlink.c:2454
genl_rcv+0x24/0x40 net/netlink/genetlink.c:638
netlink_unicast_kernel net/netlink/af_netlink.c:1317 [inline]
netlink_unicast+0x4d7/0x6a0 net/netlink/af_netlink.c:1343
netlink_sendmsg+0x80b/0xcd0 net/netlink/af_netlink.c:1908
sock_sendmsg_nosec net/socket.c:622 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:632
___sys_sendmsg+0x803/0x920 net/socket.c:2115
__sys_sendmsg+0xec/0x1b0 net/socket.c:2153
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (genl_mutex){+.+.}:
genl_lock net/netlink/genetlink.c:33 [inline]
genl_lock_all net/netlink/genetlink.c:54 [inline]
genl_register_family net/netlink/genetlink.c:331 [inline]
genl_register_family+0x1c4/0x10e3 net/netlink/genetlink.c:322
genl_init+0x12/0x62 net/netlink/genetlink.c:1047
do_one_initcall+0xf1/0x734 init/main.c:883
do_initcall_level init/main.c:951 [inline]
do_initcalls init/main.c:959 [inline]
do_basic_setup init/main.c:977 [inline]
kernel_init_freeable+0x4c9/0x5bb init/main.c:1144
kernel_init+0xd/0x1c0 init/main.c:1061
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:415

-> #0 (cb_lock){++++}:
down_read+0x37/0xb0 kernel/locking/rwsem.c:24
genl_rcv+0x15/0x40 net/netlink/genetlink.c:637
netlink_unicast_kernel net/netlink/af_netlink.c:1317 [inline]
netlink_unicast+0x4d7/0x6a0 net/netlink/af_netlink.c:1343
netlink_sendmsg+0x80b/0xcd0 net/netlink/af_netlink.c:1908
sock_sendmsg_nosec net/socket.c:622 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:632
sock_no_sendpage+0xf8/0x140 net/core/sock.c:2642
kernel_sendpage+0x82/0xd0 net/socket.c:3378
sock_sendpage+0x84/0xa0 net/socket.c:847
pipe_to_sendpage+0x263/0x320 fs/splice.c:452
splice_from_pipe_feed fs/splice.c:503 [inline]
__splice_from_pipe+0x38f/0x7a0 fs/splice.c:627
splice_from_pipe+0xd9/0x140 fs/splice.c:662
do_splice_from fs/splice.c:852 [inline]
do_splice+0x5ea/0x1250 fs/splice.c:1154
__do_sys_splice fs/splice.c:1428 [inline]
__se_sys_splice fs/splice.c:1408 [inline]
__x64_sys_splice+0x2b5/0x320 fs/splice.c:1408
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
cb_lock --> sb_writers#4 --> &pipe->mutex/1

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(sb_writers#4);
                               lock(&pipe->mutex/1);
  lock(cb_lock);

*** DEADLOCK ***
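For context on how a splice() ends up inside genl_rcv(): the #0 stack corresponds to splicing
buffered data from a pipe into a NETLINK_GENERIC socket, so cb_lock is taken while pipe->mutex
is already held; the opposite direction comes from a genetlink command whose handler ends up
doing filesystem work (stacks #1-#4, here via request_firmware over overlayfs). Below is a
minimal userspace sketch of just the splice side of the scenario. It is illustrative only and
not a reproducer (syzbot has none for this report), and the CTRL_CMD_GETFAMILY request is an
arbitrary placeholder; any genetlink message that reaches genl_rcv() exercises the same
pipe->mutex -> cb_lock ordering.

/*
 * Sketch of the CPU0 column above: splice a buffered genetlink request
 * from a pipe into a NETLINK_GENERIC socket.  The splice path takes
 * pipe->mutex, then netlink_sendmsg()/genl_rcv() take cb_lock.
 * Illustrative only; error handling omitted.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/genetlink.h>

int main(void)
{
	int pfd[2];
	int nl = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);

	pipe(pfd);

	/* Placeholder genetlink request; it need not be well-formed for
	 * genl_rcv() to take cb_lock on the receive side. */
	struct {
		struct nlmsghdr nlh;
		struct genlmsghdr genl;
	} req;
	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = sizeof(req);
	req.nlh.nlmsg_type = GENL_ID_CTRL;
	req.nlh.nlmsg_flags = NLM_F_REQUEST;
	req.genl.cmd = CTRL_CMD_GETFAMILY;

	/* Stage the message in the pipe, then splice it into the socket:
	 * splice() holds pipe->mutex while the netlink send path runs,
	 * which is the &pipe->mutex/1 -> cb_lock edge in the report. */
	write(pfd[1], &req, sizeof(req));
	splice(pfd[0], NULL, nl, NULL, sizeof(req), 0);

	close(pfd[0]);
	close(pfd[1]);
	close(nl);
	return 0;
}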

1 lock held by syz-executor.3/22347:
#0: 00000000ce8784fe (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62 [inline]
#0: 00000000ce8784fe (&pipe->mutex/1){+.+.}, at: pipe_lock+0x63/0x80 fs/pipe.c:70

stack backtrace:
CPU: 0 PID: 22347 Comm: syz-executor.3 Not tainted 4.19.107-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x188/0x20d lib/dump_stack.c:118
print_circular_bug.isra.0.cold+0x1c4/0x282 kernel/locking/lockdep.c:1221
check_prev_add kernel/locking/lockdep.c:1861 [inline]
check_prevs_add kernel/locking/lockdep.c:1974 [inline]
validate_chain kernel/locking/lockdep.c:2415 [inline]
__lock_acquire+0x2e19/0x49c0 kernel/locking/lockdep.c:3411
lock_acquire+0x170/0x400 kernel/locking/lockdep.c:3903
down_read+0x37/0xb0 kernel/locking/rwsem.c:24
genl_rcv+0x15/0x40 net/netlink/genetlink.c:637
netlink_unicast_kernel net/netlink/af_netlink.c:1317 [inline]
netlink_unicast+0x4d7/0x6a0 net/netlink/af_netlink.c:1343
netlink_sendmsg+0x80b/0xcd0 net/netlink/af_netlink.c:1908
sock_sendmsg_nosec net/socket.c:622 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:632
sock_no_sendpage+0xf8/0x140 net/core/sock.c:2642
kernel_sendpage+0x82/0xd0 net/socket.c:3378
sock_sendpage+0x84/0xa0 net/socket.c:847
pipe_to_sendpage+0x263/0x320 fs/splice.c:452
splice_from_pipe_feed fs/splice.c:503 [inline]
__splice_from_pipe+0x38f/0x7a0 fs/splice.c:627
splice_from_pipe+0xd9/0x140 fs/splice.c:662
do_splice_from fs/splice.c:852 [inline]
do_splice+0x5ea/0x1250 fs/splice.c:1154
__do_sys_splice fs/splice.c:1428 [inline]
__se_sys_splice fs/splice.c:1408 [inline]
__x64_sys_splice+0x2b5/0x320 fs/splice.c:1408
do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x45c479
Code: ad b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 7b b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fec83c94c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000113
RAX: ffffffffffffffda RBX: 00007fec83c956d4 RCX: 000000000045c479
RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 000000000076bfc0 R08: 000000000004ffe0 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 0000000000000b9f R14: 00000000004ce270 R15: 000000000076bfcc
audit: type=1400 audit(1583320017.711:489): avc: denied { map } for pid=23546 comm="syz-executor.1" path="socket:[161730]" dev="sockfs" ino=161730 scontext=unconfined_u:system_r:insmod_t:s0-s0:c0.c1023 tcontext=unconfined_u:system_r:insmod_t:s0-s0:c0.c1023 tclass=tcp_socket permissive=1
selinux_nlmsg_perm: 50 callbacks suppressed
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
--map-set only usable from mangle table
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
gfs2: not a GFS2 filesystem
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
gfs2: not a GFS2 filesystem
--map-set only usable from mangle table
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
gfs2: not a GFS2 filesystem
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=0 sclass=netlink_route_socket pig=23687 comm=syz-executor.3
--map-set only usable from mangle table
gfs2: not a GFS2 filesystem
--map-set only usable from mangle table
gfs2: not a GFS2 filesystem


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jul 2, 2020, 7:07:12 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not occurred for a while, there is no reproducer, and there has been no activity.