fs, net: deadlock between bind/splice on af_unix

Dmitry Vyukov

Dec 8, 2016, 9:47:32 AM
to Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, Cong Wang, netdev, Eric Dumazet, syzkaller
Hello,

I am getting the following deadlock reports while running the syzkaller
fuzzer on 318c8932ddec5c1c26a4af0f3c053784841c598e (Dec 7).


[ INFO: possible circular locking dependency detected ]
4.9.0-rc8+ #77 Not tainted
-------------------------------------------------------
syz-executor0/3155 is trying to acquire lock:
(&u->bindlock){+.+.+.}, at: [<ffffffff871bca1a>]
unix_autobind.isra.26+0xca/0x8a0 net/unix/af_unix.c:852
but task is already holding lock:
(&pipe->mutex/1){+.+.+.}, at: [< inline >] pipe_lock_nested
fs/pipe.c:66
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff81a8ea4b>]
pipe_lock+0x5b/0x70 fs/pipe.c:74
which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

[ 202.103497] [< inline >] validate_chain
kernel/locking/lockdep.c:2265
[ 202.103497] [<ffffffff81569576>]
__lock_acquire+0x2156/0x3380 kernel/locking/lockdep.c:3338
[ 202.103497] [<ffffffff8156b672>] lock_acquire+0x2a2/0x790
kernel/locking/lockdep.c:3749
[ 202.103497] [< inline >] __mutex_lock_common
kernel/locking/mutex.c:521
[ 202.103497] [<ffffffff88195bcf>]
mutex_lock_nested+0x23f/0xf20 kernel/locking/mutex.c:621
[ 202.103497] [< inline >] pipe_lock_nested fs/pipe.c:66
[ 202.103497] [<ffffffff81a8ea4b>] pipe_lock+0x5b/0x70 fs/pipe.c:74
[ 202.103497] [<ffffffff81b451f7>]
iter_file_splice_write+0x267/0xfa0 fs/splice.c:717
[ 202.103497] [< inline >] do_splice_from fs/splice.c:869
[ 202.103497] [< inline >] do_splice fs/splice.c:1160
[ 202.103497] [< inline >] SYSC_splice fs/splice.c:1410
[ 202.103497] [<ffffffff81b473c7>] SyS_splice+0x7d7/0x16a0
fs/splice.c:1393
[ 202.103497] [<ffffffff881a5f85>] entry_SYSCALL_64_fastpath+0x23/0xc6

[ 202.103497] [< inline >] validate_chain
kernel/locking/lockdep.c:2265
[ 202.103497] [<ffffffff81569576>]
__lock_acquire+0x2156/0x3380 kernel/locking/lockdep.c:3338
[ 202.103497] [<ffffffff8156b672>] lock_acquire+0x2a2/0x790
kernel/locking/lockdep.c:3749
[ 202.103497] [< inline >]
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:35
[ 202.103497] [< inline >] percpu_down_read
include/linux/percpu-rwsem.h:58
[ 202.103497] [<ffffffff81a7bb33>]
__sb_start_write+0x193/0x2a0 fs/super.c:1252
[ 202.103497] [< inline >] sb_start_write
include/linux/fs.h:1549
[ 202.103497] [<ffffffff81af9954>] mnt_want_write+0x44/0xb0
fs/namespace.c:389
[ 202.103497] [<ffffffff81ab09f6>] filename_create+0x156/0x620
fs/namei.c:3598
[ 202.103497] [<ffffffff81ab0ef8>] kern_path_create+0x38/0x50
fs/namei.c:3644
[ 202.103497] [< inline >] unix_mknod net/unix/af_unix.c:967
[ 202.103497] [<ffffffff871c0e11>] unix_bind+0x4d1/0xe60
net/unix/af_unix.c:1035
[ 202.103497] [<ffffffff86a76b7e>] SYSC_bind+0x20e/0x4c0
net/socket.c:1382
[ 202.103497] [<ffffffff86a7a509>] SyS_bind+0x29/0x30 net/socket.c:1368
[ 202.103497] [<ffffffff881a5f85>] entry_SYSCALL_64_fastpath+0x23/0xc6

[ 202.103497] [< inline >] check_prev_add
kernel/locking/lockdep.c:1828
[ 202.103497] [<ffffffff8156309b>]
check_prevs_add+0xaab/0x1c20 kernel/locking/lockdep.c:1938
[ 202.103497] [< inline >] validate_chain
kernel/locking/lockdep.c:2265
[ 202.103497] [<ffffffff81569576>]
__lock_acquire+0x2156/0x3380 kernel/locking/lockdep.c:3338
[ 202.103497] [<ffffffff8156b672>] lock_acquire+0x2a2/0x790
kernel/locking/lockdep.c:3749
[ 202.103497] [< inline >] __mutex_lock_common
kernel/locking/mutex.c:521
[ 202.103497] [<ffffffff88196b82>]
mutex_lock_interruptible_nested+0x2d2/0x11d0
kernel/locking/mutex.c:650
[ 202.103497] [<ffffffff871bca1a>]
unix_autobind.isra.26+0xca/0x8a0 net/unix/af_unix.c:852
[ 202.103497] [<ffffffff871c76dd>]
unix_dgram_sendmsg+0x105d/0x1730 net/unix/af_unix.c:1667
[ 202.103497] [<ffffffff871c7ea8>]
unix_seqpacket_sendmsg+0xf8/0x170 net/unix/af_unix.c:2071
[ 202.103497] [< inline >] sock_sendmsg_nosec net/socket.c:621
[ 202.103497] [<ffffffff86a7618f>] sock_sendmsg+0xcf/0x110
net/socket.c:631
[ 202.103497] [<ffffffff86a7683c>] kernel_sendmsg+0x4c/0x60
net/socket.c:639
[ 202.103497] [<ffffffff86a8101d>]
sock_no_sendpage+0x20d/0x310 net/core/sock.c:2321
[ 202.103497] [<ffffffff86a74c95>] kernel_sendpage+0x95/0xf0
net/socket.c:3289
[ 202.103497] [<ffffffff86a74d92>] sock_sendpage+0xa2/0xd0
net/socket.c:775
[ 202.103497] [<ffffffff81b3ee1e>]
pipe_to_sendpage+0x2ae/0x390 fs/splice.c:469
[ 202.103497] [< inline >] splice_from_pipe_feed fs/splice.c:520
[ 202.103497] [<ffffffff81b42f3f>]
__splice_from_pipe+0x31f/0x750 fs/splice.c:644
[ 202.103497] [<ffffffff81b4665c>]
splice_from_pipe+0x1dc/0x300 fs/splice.c:679
[ 202.103497] [<ffffffff81b467c5>]
generic_splice_sendpage+0x45/0x60 fs/splice.c:850
[ 202.103497] [< inline >] do_splice_from fs/splice.c:869
[ 202.103497] [< inline >] do_splice fs/splice.c:1160
[ 202.103497] [< inline >] SYSC_splice fs/splice.c:1410
[ 202.103497] [<ffffffff81b473c7>] SyS_splice+0x7d7/0x16a0
fs/splice.c:1393
[ 202.103497] [<ffffffff881a5f85>] entry_SYSCALL_64_fastpath+0x23/0xc6

other info that might help us debug this:

Chain exists of:
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(sb_writers#5);
                               lock(&pipe->mutex/1);
  lock(&u->bindlock);

*** DEADLOCK ***

1 lock held by syz-executor0/3155:
#0: (&pipe->mutex/1){+.+.+.}, at: [< inline >]
pipe_lock_nested fs/pipe.c:66
#0: (&pipe->mutex/1){+.+.+.}, at: [<ffffffff81a8ea4b>]
pipe_lock+0x5b/0x70 fs/pipe.c:74

stack backtrace:
CPU: 3 PID: 3155 Comm: syz-executor0 Not tainted 4.9.0-rc8+ #77
Hardware name: Google Google/Google, BIOS Google 01/01/2011
ffff88004b1fe288 ffffffff834c44f9 ffffffff00000003 1ffff1000963fbe4
ffffed000963fbdc 0000000041b58ab3 ffffffff895816f0 ffffffff834c420b
0000000000000000 0000000000000000 0000000000000000 0000000000000000
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff834c44f9>] dump_stack+0x2ee/0x3f5 lib/dump_stack.c:51
[<ffffffff81560cb0>] print_circular_bug+0x310/0x3c0
kernel/locking/lockdep.c:1202
[< inline >] check_prev_add kernel/locking/lockdep.c:1828
[<ffffffff8156309b>] check_prevs_add+0xaab/0x1c20 kernel/locking/lockdep.c:1938
[< inline >] validate_chain kernel/locking/lockdep.c:2265
[<ffffffff81569576>] __lock_acquire+0x2156/0x3380 kernel/locking/lockdep.c:3338
[<ffffffff8156b672>] lock_acquire+0x2a2/0x790 kernel/locking/lockdep.c:3749
[< inline >] __mutex_lock_common kernel/locking/mutex.c:521
[<ffffffff88196b82>] mutex_lock_interruptible_nested+0x2d2/0x11d0
kernel/locking/mutex.c:650
[<ffffffff871bca1a>] unix_autobind.isra.26+0xca/0x8a0 net/unix/af_unix.c:852
[<ffffffff871c76dd>] unix_dgram_sendmsg+0x105d/0x1730 net/unix/af_unix.c:1667
[<ffffffff871c7ea8>] unix_seqpacket_sendmsg+0xf8/0x170 net/unix/af_unix.c:2071
[< inline >] sock_sendmsg_nosec net/socket.c:621
[<ffffffff86a7618f>] sock_sendmsg+0xcf/0x110 net/socket.c:631
[<ffffffff86a7683c>] kernel_sendmsg+0x4c/0x60 net/socket.c:639
[<ffffffff86a8101d>] sock_no_sendpage+0x20d/0x310 net/core/sock.c:2321
[<ffffffff86a74c95>] kernel_sendpage+0x95/0xf0 net/socket.c:3289
[<ffffffff86a74d92>] sock_sendpage+0xa2/0xd0 net/socket.c:775
[<ffffffff81b3ee1e>] pipe_to_sendpage+0x2ae/0x390 fs/splice.c:469
[< inline >] splice_from_pipe_feed fs/splice.c:520
[<ffffffff81b42f3f>] __splice_from_pipe+0x31f/0x750 fs/splice.c:644
[<ffffffff81b4665c>] splice_from_pipe+0x1dc/0x300 fs/splice.c:679
[<ffffffff81b467c5>] generic_splice_sendpage+0x45/0x60 fs/splice.c:850
[< inline >] do_splice_from fs/splice.c:869
[< inline >] do_splice fs/splice.c:1160
[< inline >] SYSC_splice fs/splice.c:1410
[<ffffffff81b473c7>] SyS_splice+0x7d7/0x16a0 fs/splice.c:1393
[<ffffffff881a5f85>] entry_SYSCALL_64_fastpath+0x23/0xc6

Dmitry Vyukov

Dec 8, 2016, 11:31:05 AM
to Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, Cong Wang, netdev, Eric Dumazet, syzkaller
Seems to be the same, but detected in the context of the second thread:

[ INFO: possible circular locking dependency detected ]
4.9.0-rc8+ #77 Not tainted
-------------------------------------------------------
syz-executor3/24365 is trying to acquire lock:
(&pipe->mutex/1){+.+.+.}, at: [< inline >] pipe_lock_nested
fs/pipe.c:66
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff81a8ea4b>]
pipe_lock+0x5b/0x70 fs/pipe.c:74
but task is already holding lock:
(sb_writers#5){.+.+.+}, at: [< inline >] file_start_write
include/linux/fs.h:2592
(sb_writers#5){.+.+.+}, at: [< inline >] do_splice fs/splice.c:1159
(sb_writers#5){.+.+.+}, at: [< inline >] SYSC_splice fs/splice.c:1410
(sb_writers#5){.+.+.+}, at: [<ffffffff81b47d9f>]
SyS_splice+0x11af/0x16a0 fs/splice.c:1393
which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

[ 131.709013] [< inline >] validate_chain
kernel/locking/lockdep.c:2265
[ 131.709013] [<ffffffff81569576>]
__lock_acquire+0x2156/0x3380 kernel/locking/lockdep.c:3338
[ 131.709013] [<ffffffff8156b672>] lock_acquire+0x2a2/0x790
kernel/locking/lockdep.c:3749
[ 131.709013] [< inline >]
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:35
[ 131.709013] [< inline >] percpu_down_read
include/linux/percpu-rwsem.h:58
[ 131.709013] [<ffffffff81a7bb33>]
__sb_start_write+0x193/0x2a0 fs/super.c:1252
[ 131.709013] [< inline >] sb_start_write
include/linux/fs.h:1549
[ 131.709013] [<ffffffff81af9954>] mnt_want_write+0x44/0xb0
fs/namespace.c:389
[ 131.709013] [<ffffffff81ab09f6>] filename_create+0x156/0x620
fs/namei.c:3598
[ 131.709013] [<ffffffff81ab0ef8>] kern_path_create+0x38/0x50
fs/namei.c:3644
[ 131.709013] [< inline >] unix_mknod net/unix/af_unix.c:967
[ 131.709013] [<ffffffff871c0e11>] unix_bind+0x4d1/0xe60
net/unix/af_unix.c:1035
[ 131.709013] [<ffffffff86a76b7e>] SYSC_bind+0x20e/0x4c0
net/socket.c:1382
[ 131.709013] [<ffffffff86a7a509>] SyS_bind+0x29/0x30 net/socket.c:1368
[ 131.709013] [<ffffffff881a5f85>] entry_SYSCALL_64_fastpath+0x23/0xc6

[ 131.709013] [< inline >] validate_chain
kernel/locking/lockdep.c:2265
[ 131.709013] [<ffffffff81569576>]
__lock_acquire+0x2156/0x3380 kernel/locking/lockdep.c:3338
[ 131.709013] [<ffffffff8156b672>] lock_acquire+0x2a2/0x790
kernel/locking/lockdep.c:3749
[ 131.709013] [< inline >] __mutex_lock_common
kernel/locking/mutex.c:521
[ 131.709013] [<ffffffff88196b82>]
mutex_lock_interruptible_nested+0x2d2/0x11d0
kernel/locking/mutex.c:650
[ 131.709013] [<ffffffff871bca1a>]
unix_autobind.isra.26+0xca/0x8a0 net/unix/af_unix.c:852
[ 131.709013] [<ffffffff871c76dd>]
unix_dgram_sendmsg+0x105d/0x1730 net/unix/af_unix.c:1667
[ 131.709013] [<ffffffff871c7ea8>]
unix_seqpacket_sendmsg+0xf8/0x170 net/unix/af_unix.c:2071
[ 131.709013] [< inline >] sock_sendmsg_nosec net/socket.c:621
[ 131.709013] [<ffffffff86a7618f>] sock_sendmsg+0xcf/0x110
net/socket.c:631
[ 131.709013] [<ffffffff86a7683c>] kernel_sendmsg+0x4c/0x60
net/socket.c:639
[ 131.709013] [<ffffffff86a8101d>]
sock_no_sendpage+0x20d/0x310 net/core/sock.c:2321
[ 131.709013] [<ffffffff86a74c95>] kernel_sendpage+0x95/0xf0
net/socket.c:3289
[ 131.709013] [<ffffffff86a74d92>] sock_sendpage+0xa2/0xd0
net/socket.c:775
[ 131.709013] [<ffffffff81b3ee1e>]
pipe_to_sendpage+0x2ae/0x390 fs/splice.c:469
[ 131.709013] [< inline >] splice_from_pipe_feed fs/splice.c:520
[ 131.709013] [<ffffffff81b42f3f>]
__splice_from_pipe+0x31f/0x750 fs/splice.c:644
[ 131.709013] [<ffffffff81b4665c>]
splice_from_pipe+0x1dc/0x300 fs/splice.c:679
[ 131.709013] [<ffffffff81b467c5>]
generic_splice_sendpage+0x45/0x60 fs/splice.c:850
[ 131.709013] [< inline >] do_splice_from fs/splice.c:869
[ 131.709013] [< inline >] do_splice fs/splice.c:1160
[ 131.709013] [< inline >] SYSC_splice fs/splice.c:1410
[ 131.709013] [<ffffffff81b473c7>] SyS_splice+0x7d7/0x16a0
fs/splice.c:1393
[ 131.709013] [<ffffffff881a5f85>] entry_SYSCALL_64_fastpath+0x23/0xc6

[ 131.709013] [< inline >] check_prev_add
kernel/locking/lockdep.c:1828
[ 131.709013] [<ffffffff8156309b>]
check_prevs_add+0xaab/0x1c20 kernel/locking/lockdep.c:1938
[ 131.709013] [< inline >] validate_chain
kernel/locking/lockdep.c:2265
[ 131.709013] [<ffffffff81569576>]
__lock_acquire+0x2156/0x3380 kernel/locking/lockdep.c:3338
[ 131.709013] [<ffffffff8156b672>] lock_acquire+0x2a2/0x790
kernel/locking/lockdep.c:3749
[ 131.709013] [< inline >] __mutex_lock_common
kernel/locking/mutex.c:521
[ 131.709013] [<ffffffff88195bcf>]
mutex_lock_nested+0x23f/0xf20 kernel/locking/mutex.c:621
[ 131.709013] [< inline >] pipe_lock_nested fs/pipe.c:66
[ 131.709013] [<ffffffff81a8ea4b>] pipe_lock+0x5b/0x70 fs/pipe.c:74
[ 131.709013] [<ffffffff81b451f7>]
iter_file_splice_write+0x267/0xfa0 fs/splice.c:717
[ 131.709013] [< inline >] do_splice_from fs/splice.c:869
[ 131.709013] [< inline >] do_splice fs/splice.c:1160
[ 131.709013] [< inline >] SYSC_splice fs/splice.c:1410
[ 131.709013] [<ffffffff81b473c7>] SyS_splice+0x7d7/0x16a0
fs/splice.c:1393
[ 131.709013] [<ffffffff881a5f85>] entry_SYSCALL_64_fastpath+0x23/0xc6

other info that might help us debug this:

Chain exists of:
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(sb_writers#5);
                               lock(&u->bindlock);
                               lock(sb_writers#5);
  lock(&pipe->mutex/1);

*** DEADLOCK ***

1 lock held by syz-executor3/24365:
#0: (sb_writers#5){.+.+.+}, at: [< inline >]
file_start_write include/linux/fs.h:2592
#0: (sb_writers#5){.+.+.+}, at: [< inline >] do_splice
fs/splice.c:1159
#0: (sb_writers#5){.+.+.+}, at: [< inline >] SYSC_splice
fs/splice.c:1410
#0: (sb_writers#5){.+.+.+}, at: [<ffffffff81b47d9f>]
SyS_splice+0x11af/0x16a0 fs/splice.c:1393

stack backtrace:
CPU: 2 PID: 24365 Comm: syz-executor3 Not tainted 4.9.0-rc8+ #77
Hardware name: Google Google/Google, BIOS Google 01/01/2011
ffff8800597b6af8 ffffffff834c44f9 ffffffff00000002 1ffff1000b2f6cf2
ffffed000b2f6cea 0000000041b58ab3 ffffffff895816f0 ffffffff834c420b
0000000041b58ab3 ffffffff894dbca8 ffffffff8155c780 ffff8800597b6878
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff834c44f9>] dump_stack+0x2ee/0x3f5 lib/dump_stack.c:51
[<ffffffff81560cb0>] print_circular_bug+0x310/0x3c0
kernel/locking/lockdep.c:1202
[< inline >] check_prev_add kernel/locking/lockdep.c:1828
[<ffffffff8156309b>] check_prevs_add+0xaab/0x1c20 kernel/locking/lockdep.c:1938
[< inline >] validate_chain kernel/locking/lockdep.c:2265
[<ffffffff81569576>] __lock_acquire+0x2156/0x3380 kernel/locking/lockdep.c:3338
[<ffffffff8156b672>] lock_acquire+0x2a2/0x790 kernel/locking/lockdep.c:3749
[< inline >] __mutex_lock_common kernel/locking/mutex.c:521
[<ffffffff88195bcf>] mutex_lock_nested+0x23f/0xf20 kernel/locking/mutex.c:621
[< inline >] pipe_lock_nested fs/pipe.c:66
[<ffffffff81a8ea4b>] pipe_lock+0x5b/0x70 fs/pipe.c:74
[<ffffffff81b451f7>] iter_file_splice_write+0x267/0xfa0 fs/splice.c:717

Cong Wang

Dec 8, 2016, 7:08:48 PM
to Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
On Thu, Dec 8, 2016 at 8:30 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> Chain exists of:
> Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(sb_writers#5);
>                                lock(&u->bindlock);
>                                lock(sb_writers#5);
>   lock(&pipe->mutex/1);

This looks false positive, probably just needs lockdep_set_class()
to set keys for pipe->mutex and unix->bindlock.

Al Viro

Dec 8, 2016, 8:32:16 PM
to Cong Wang, Dmitry Vyukov, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
I'm afraid that it's not a false positive at all.

Preparations:
* create an AF_UNIX socket.
* set SOCK_PASSCRED on it.
* create a pipe.

Child 1: splice from pipe to socket; locks pipe and proceeds down towards
unix_dgram_sendmsg().

Child 2: splice from pipe to /mnt/foo/bar; requests write access to /mnt
and blocks on attempt to lock the pipe already locked by (1).

Child 3: freeze /mnt; blocks until (2) is done

Child 4: bind() the socket to /mnt/barf; grabs ->bindlock on the socket and
proceeds to create /mnt/barf, which blocks due to the fairness of the freezer (no
extra write accesses to something that is in the process of being frozen).

_Now_ (1) gets around to unix_dgram_sendmsg(). We still have NULL u->addr,
since bind() has not gotten through yet. We also have SOCK_PASSCRED set,
so we attempt autobind; it blocks on the ->bindlock, which won't be
released until bind() is done (at which point we'll see non-NULL u->addr
and bugger off from autobind), but bind() won't succeed until /mnt
goes through the freeze-thaw cycle, which won't happen until (2) finishes,
which won't happen until (1) unlocks the pipe. Deadlock.

Granted, ->bindlock is taken interruptibly, so it's not that much of
a problem (you can kill the damn thing), but it still requires manual
intervention.

Why do we do autobind there, anyway, and why is it conditional on
SOCK_PASSCRED? Note that e.g. for SOCK_STREAM we can bloody well get
to sending stuff without autobind ever done - just use socketpair()
to create that sucker and we won't be going through the connect()
at all.

Cong Wang

Dec 9, 2016, 1:32:21 AM
to Al Viro, Dmitry Vyukov, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
On Thu, Dec 8, 2016 at 5:32 PM, Al Viro <vi...@zeniv.linux.org.uk> wrote:
> On Thu, Dec 08, 2016 at 04:08:27PM -0800, Cong Wang wrote:
>> On Thu, Dec 8, 2016 at 8:30 AM, Dmitry Vyukov <dvy...@google.com> wrote:
>> > Chain exists of:
>> > Possible unsafe locking scenario:
>> >
>> >        CPU0                    CPU1
>> >        ----                    ----
>> >   lock(sb_writers#5);
>> >                                lock(&u->bindlock);
>> >                                lock(sb_writers#5);
>> >   lock(&pipe->mutex/1);
>>
>> This looks false positive, probably just needs lockdep_set_class()
>> to set keys for pipe->mutex and unix->bindlock.
>
> I'm afraid that it's not a false positive at all.

Right, I was totally misled by the scenario output of lockdep; the stack
traces actually are much more reasonable.

The deadlock scenario is actually simple, compared with the netlink one
which has 4 locks involved:

unix_bind() path:
u->bindlock ==> sb_writers

do_splice() path:
sb_writers ==> pipe->mutex ==> u->bindlock

*** DEADLOCK ***

>
> Why do we do autobind there, anyway, and why is it conditional on
> SOCK_PASSCRED? Note that e.g. for SOCK_STREAM we can bloody well get
> to sending stuff without autobind ever done - just use socketpair()
> to create that sucker and we won't be going through the connect()
> at all.

In the case Dmitry reported, unix_dgram_sendmsg() calls unix_autobind(),
not SOCK_STREAM.

I guess some lock, perhaps u->bindlock, could be dropped before
acquiring the next one (sb_writers), but I need to double-check.

Al Viro

Dec 9, 2016, 1:41:50 AM
to Cong Wang, Dmitry Vyukov, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
On Thu, Dec 08, 2016 at 10:32:00PM -0800, Cong Wang wrote:

> > Why do we do autobind there, anyway, and why is it conditional on
> > SOCK_PASSCRED? Note that e.g. for SOCK_STREAM we can bloody well get
> > to sending stuff without autobind ever done - just use socketpair()
> > to create that sucker and we won't be going through the connect()
> > at all.
>
> In the case Dmitry reported, unix_dgram_sendmsg() calls unix_autobind(),
> not SOCK_STREAM.

Yes, I've noticed. What I'm asking is what in there needs autobind triggered
on sendmsg and why doesn't the same need affect the SOCK_STREAM case?

> I guess some lock, perhaps the u->bindlock could be dropped before
> acquiring the next one (sb_writer), but I need to double check.

Bad idea, IMO - do you *want* autobind being able to come through while
bind(2) is busy with mknod?

Dmitry Vyukov

Jan 16, 2017, 4:33:01 AM
to Al Viro, Cong Wang, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
Ping. This is still happening on HEAD.


[ INFO: possible circular locking dependency detected ]
4.9.0 #1 Not tainted
-------------------------------------------------------
syz-executor6/25491 is trying to acquire lock:
(&u->bindlock){+.+.+.}, at: [<ffffffff83962315>]
unix_autobind.isra.28+0xc5/0x880 net/unix/af_unix.c:852
but task is already holding lock:
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff81a45ac6>] pipe_lock_nested
fs/pipe.c:66 [inline]
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff81a45ac6>]
pipe_lock+0x56/0x70 fs/pipe.c:74
which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

[ 836.500536] [<ffffffff8156f989>] validate_chain
kernel/locking/lockdep.c:2265 [inline]
[ 836.500536] [<ffffffff8156f989>]
__lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3338
[ 836.508456] [<ffffffff81571b11>] lock_acquire+0x2a1/0x630
kernel/locking/lockdep.c:3753
[ 836.516117] [<ffffffff8435f9be>] __mutex_lock_common
kernel/locking/mutex.c:521 [inline]
[ 836.516117] [<ffffffff8435f9be>]
mutex_lock_nested+0x24e/0xff0 kernel/locking/mutex.c:621
[ 836.524139] [<ffffffff81a45ac6>] pipe_lock_nested
fs/pipe.c:66 [inline]
[ 836.524139] [<ffffffff81a45ac6>] pipe_lock+0x56/0x70 fs/pipe.c:74
[ 836.531287] [<ffffffff81af63d2>]
iter_file_splice_write+0x262/0xf80 fs/splice.c:717
[ 836.539720] [<ffffffff81af84e0>] do_splice_from
fs/splice.c:869 [inline]
[ 836.539720] [<ffffffff81af84e0>] do_splice fs/splice.c:1160 [inline]
[ 836.539720] [<ffffffff81af84e0>] SYSC_splice fs/splice.c:1410 [inline]
[ 836.539720] [<ffffffff81af84e0>] SyS_splice+0x7c0/0x1690
fs/splice.c:1393
[ 836.547273] [<ffffffff84370981>] entry_SYSCALL_64_fastpath+0x1f/0xc2

[ 836.560730] [<ffffffff8156f989>] validate_chain
kernel/locking/lockdep.c:2265 [inline]
[ 836.560730] [<ffffffff8156f989>]
__lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3338
[ 836.568655] [<ffffffff81571b11>] lock_acquire+0x2a1/0x630
kernel/locking/lockdep.c:3753
[ 836.576230] [<ffffffff81a326ca>]
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:35
[inline]
[ 836.576230] [<ffffffff81a326ca>] percpu_down_read
include/linux/percpu-rwsem.h:58 [inline]
[ 836.576230] [<ffffffff81a326ca>]
__sb_start_write+0x19a/0x2b0 fs/super.c:1252
[ 836.584168] [<ffffffff81ab1edf>] sb_start_write
include/linux/fs.h:1554 [inline]
[ 836.584168] [<ffffffff81ab1edf>] mnt_want_write+0x3f/0xb0
fs/namespace.c:389
[ 836.591744] [<ffffffff81a67581>] filename_create+0x151/0x610
fs/namei.c:3598
[ 836.599574] [<ffffffff81a67a73>] kern_path_create+0x33/0x40
fs/namei.c:3644
[ 836.607328] [<ffffffff83966683>] unix_mknod
net/unix/af_unix.c:967 [inline]
[ 836.607328] [<ffffffff83966683>] unix_bind+0x4c3/0xe00
net/unix/af_unix.c:1035
[ 836.614634] [<ffffffff834f047e>] SYSC_bind+0x20e/0x4a0
net/socket.c:1382
[ 836.621950] [<ffffffff834f3d84>] SyS_bind+0x24/0x30 net/socket.c:1368
[ 836.629015] [<ffffffff84370981>] entry_SYSCALL_64_fastpath+0x1f/0xc2

[ 836.642405] [<ffffffff815694cd>] check_prev_add
kernel/locking/lockdep.c:1828 [inline]
[ 836.642405] [<ffffffff815694cd>]
check_prevs_add+0xa8d/0x1c00 kernel/locking/lockdep.c:1938
[ 836.650348] [<ffffffff8156f989>] validate_chain
kernel/locking/lockdep.c:2265 [inline]
[ 836.650348] [<ffffffff8156f989>]
__lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3338
[ 836.658315] [<ffffffff81571b11>] lock_acquire+0x2a1/0x630
kernel/locking/lockdep.c:3753
[ 836.665928] [<ffffffff84361ce1>] __mutex_lock_common
kernel/locking/mutex.c:521 [inline]
[ 836.665928] [<ffffffff84361ce1>]
mutex_lock_interruptible_nested+0x2e1/0x12a0
kernel/locking/mutex.c:650
[ 836.675287] [<ffffffff83962315>]
unix_autobind.isra.28+0xc5/0x880 net/unix/af_unix.c:852
[ 836.683571] [<ffffffff8396cdfc>]
unix_dgram_sendmsg+0x104c/0x1720 net/unix/af_unix.c:1667
[ 836.691870] [<ffffffff8396d5c3>]
unix_seqpacket_sendmsg+0xf3/0x160 net/unix/af_unix.c:2071
[ 836.700261] [<ffffffff834efaaa>] sock_sendmsg_nosec
net/socket.c:621 [inline]
[ 836.700261] [<ffffffff834efaaa>] sock_sendmsg+0xca/0x110
net/socket.c:631
[ 836.707758] [<ffffffff834f0137>] kernel_sendmsg+0x47/0x60
net/socket.c:639
[ 836.715327] [<ffffffff834faca6>]
sock_no_sendpage+0x216/0x300 net/core/sock.c:2321
[ 836.723278] [<ffffffff834ee5e0>] kernel_sendpage+0x90/0xe0
net/socket.c:3289
[ 836.730944] [<ffffffff834ee6bc>] sock_sendpage+0x8c/0xc0
net/socket.c:775
[ 836.738421] [<ffffffff81af011d>]
pipe_to_sendpage+0x29d/0x3e0 fs/splice.c:469
[ 836.746374] [<ffffffff81af4168>] splice_from_pipe_feed
fs/splice.c:520 [inline]
[ 836.746374] [<ffffffff81af4168>]
__splice_from_pipe+0x328/0x760 fs/splice.c:644
[ 836.754487] [<ffffffff81af77a7>]
splice_from_pipe+0x1d7/0x2f0 fs/splice.c:679
[ 836.762451] [<ffffffff81af7900>]
generic_splice_sendpage+0x40/0x50 fs/splice.c:850
[ 836.770826] [<ffffffff81af84e0>] do_splice_from
fs/splice.c:869 [inline]
[ 836.770826] [<ffffffff81af84e0>] do_splice fs/splice.c:1160 [inline]
[ 836.770826] [<ffffffff81af84e0>] SYSC_splice fs/splice.c:1410 [inline]
[ 836.770826] [<ffffffff81af84e0>] SyS_splice+0x7c0/0x1690
fs/splice.c:1393
[ 836.778307] [<ffffffff84370981>] entry_SYSCALL_64_fastpath+0x1f/0xc2

other info that might help us debug this:

Chain exists of:
Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(sb_writers#5);
                               lock(&pipe->mutex/1);
  lock(&u->bindlock);

*** DEADLOCK ***

1 lock held by syz-executor6/25491:
#0: (&pipe->mutex/1){+.+.+.}, at: [<ffffffff81a45ac6>]
pipe_lock_nested fs/pipe.c:66 [inline]
#0: (&pipe->mutex/1){+.+.+.}, at: [<ffffffff81a45ac6>]
pipe_lock+0x56/0x70 fs/pipe.c:74

stack backtrace:
CPU: 0 PID: 25491 Comm: syz-executor6 Not tainted 4.9.0 #1
Hardware name: Google Google Compute Engine/Google Compute Engine,
BIOS Google 01/01/2011
ffff8801cacc6248 ffffffff8234654f ffffffff00000000 1ffff10039598bdc
ffffed0039598bd4 0000000041b58ab3 ffffffff84b37a60 ffffffff82346261
0000000000000000 0000000000000000 0000000000000000 0000000000000000
Call Trace:
[<ffffffff8234654f>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff8234654f>] dump_stack+0x2ee/0x3ef lib/dump_stack.c:51
[<ffffffff81567147>] print_circular_bug+0x307/0x3b0
kernel/locking/lockdep.c:1202
[<ffffffff815694cd>] check_prev_add kernel/locking/lockdep.c:1828 [inline]
[<ffffffff815694cd>] check_prevs_add+0xa8d/0x1c00 kernel/locking/lockdep.c:1938
[<ffffffff8156f989>] validate_chain kernel/locking/lockdep.c:2265 [inline]
[<ffffffff8156f989>] __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3338
[<ffffffff81571b11>] lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3753
[<ffffffff84361ce1>] __mutex_lock_common kernel/locking/mutex.c:521 [inline]
[<ffffffff84361ce1>] mutex_lock_interruptible_nested+0x2e1/0x12a0
kernel/locking/mutex.c:650
[<ffffffff83962315>] unix_autobind.isra.28+0xc5/0x880 net/unix/af_unix.c:852
[<ffffffff8396cdfc>] unix_dgram_sendmsg+0x104c/0x1720 net/unix/af_unix.c:1667
[<ffffffff8396d5c3>] unix_seqpacket_sendmsg+0xf3/0x160 net/unix/af_unix.c:2071
[<ffffffff834efaaa>] sock_sendmsg_nosec net/socket.c:621 [inline]
[<ffffffff834efaaa>] sock_sendmsg+0xca/0x110 net/socket.c:631
[<ffffffff834f0137>] kernel_sendmsg+0x47/0x60 net/socket.c:639
[<ffffffff834faca6>] sock_no_sendpage+0x216/0x300 net/core/sock.c:2321
[<ffffffff834ee5e0>] kernel_sendpage+0x90/0xe0 net/socket.c:3289
[<ffffffff834ee6bc>] sock_sendpage+0x8c/0xc0 net/socket.c:775
[<ffffffff81af011d>] pipe_to_sendpage+0x29d/0x3e0 fs/splice.c:469
[<ffffffff81af4168>] splice_from_pipe_feed fs/splice.c:520 [inline]
[<ffffffff81af4168>] __splice_from_pipe+0x328/0x760 fs/splice.c:644
[<ffffffff81af77a7>] splice_from_pipe+0x1d7/0x2f0 fs/splice.c:679
[<ffffffff81af7900>] generic_splice_sendpage+0x40/0x50 fs/splice.c:850
[<ffffffff81af84e0>] do_splice_from fs/splice.c:869 [inline]
[<ffffffff81af84e0>] do_splice fs/splice.c:1160 [inline]
[<ffffffff81af84e0>] SYSC_splice fs/splice.c:1410 [inline]
[<ffffffff81af84e0>] SyS_splice+0x7c0/0x1690 fs/splice.c:1393
[<ffffffff84370981>] entry_SYSCALL_64_fastpath+0x1f/0xc2
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
FAULT_FLAG_ALLOW_RETRY missing 30
CPU: 1 PID: 25716 Comm: syz-executor3 Not tainted 4.9.0 #1
Hardware name: Google Google Compute Engine/Google Compute Engine,
BIOS Google 01/01/2011
ffff8801b6a274a8 ffffffff8234654f ffffffff00000001 1ffff10036d44e28
ffffed0036d44e20 0000000041b58ab3 ffffffff84b37a60 ffffffff82346261
0000000000000000 ffff8801dc122980 ffff8801a36c2800 1ffff10036d44e2a
Call Trace:
[<ffffffff8234654f>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff8234654f>] dump_stack+0x2ee/0x3ef lib/dump_stack.c:51
[<ffffffff81b6325d>] handle_userfault+0x115d/0x1fc0 fs/userfaultfd.c:381
[<ffffffff8192f792>] do_anonymous_page mm/memory.c:2800 [inline]
[<ffffffff8192f792>] handle_pte_fault mm/memory.c:3560 [inline]
[<ffffffff8192f792>] __handle_mm_fault mm/memory.c:3652 [inline]
[<ffffffff8192f792>] handle_mm_fault+0x24f2/0x2890 mm/memory.c:3689
[<ffffffff81323df6>] __do_page_fault+0x4f6/0xb60 arch/x86/mm/fault.c:1397
[<ffffffff813244b4>] do_page_fault+0x54/0x70 arch/x86/mm/fault.c:1460
[<ffffffff84371d38>] page_fault+0x28/0x30 arch/x86/entry/entry_64.S:1012
[<ffffffff81a65dfe>] getname_flags+0x10e/0x580 fs/namei.c:148
[<ffffffff81a66f1d>] user_path_at_empty+0x2d/0x50 fs/namei.c:2556
[<ffffffff81a385e1>] user_path_at include/linux/namei.h:55 [inline]
[<ffffffff81a385e1>] vfs_fstatat+0xf1/0x1a0 fs/stat.c:106
[<ffffffff81a3a12b>] vfs_lstat fs/stat.c:129 [inline]
[<ffffffff81a3a12b>] SYSC_newlstat+0xab/0x140 fs/stat.c:283
[<ffffffff81a3a51d>] SyS_newlstat+0x1d/0x30 fs/stat.c:277
[<ffffffff84370981>] entry_SYSCALL_64_fastpath+0x1f/0xc2
FAULT_FLAG_ALLOW_RETRY missing 30
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
CPU: 1 PID: 25716 Comm: syz-executor3 Not tainted 4.9.0 #1
Hardware name: Google Google Compute Engine/Google Compute Engine,
BIOS Google 01/01/2011
ffff8801b6a27360 ffffffff8234654f ffffffff00000001 1ffff10036d44dff
ffffed0036d44df7 0000000041b58ab3 ffffffff84b37a60 ffffffff82346261
0000000000000082 ffff8801dc122980 ffff8801da622540 1ffff10036d44e01
Call Trace:
[<ffffffff8234654f>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff8234654f>] dump_stack+0x2ee/0x3ef lib/dump_stack.c:51
[<ffffffff81b6325d>] handle_userfault+0x115d/0x1fc0 fs/userfaultfd.c:381
[<ffffffff8192f792>] do_anonymous_page mm/memory.c:2800 [inline]
[<ffffffff8192f792>] handle_pte_fault mm/memory.c:3560 [inline]
[<ffffffff8192f792>] __handle_mm_fault mm/memory.c:3652 [inline]
[<ffffffff8192f792>] handle_mm_fault+0x24f2/0x2890 mm/memory.c:3689
[<ffffffff81323df6>] __do_page_fault+0x4f6/0xb60 arch/x86/mm/fault.c:1397
[<ffffffff81324611>] trace_do_page_fault+0x141/0x6c0 arch/x86/mm/fault.c:1490
[<ffffffff84371d08>] trace_page_fault+0x28/0x30 arch/x86/entry/entry_64.S:1012
[<ffffffff81a65dfe>] getname_flags+0x10e/0x580 fs/namei.c:148
[<ffffffff81a66f1d>] user_path_at_empty+0x2d/0x50 fs/namei.c:2556
[<ffffffff81a385e1>] user_path_at include/linux/namei.h:55 [inline]
[<ffffffff81a385e1>] vfs_fstatat+0xf1/0x1a0 fs/stat.c:106
[<ffffffff81a3a12b>] vfs_lstat fs/stat.c:129 [inline]
[<ffffffff81a3a12b>] SYSC_newlstat+0xab/0x140 fs/stat.c:283
[<ffffffff81a3a51d>] SyS_newlstat+0x1d/0x30 fs/stat.c:277
[<ffffffff84370981>] entry_SYSCALL_64_fastpath+0x1f/0xc2

Eric W. Biederman

Jan 17, 2017, 3:11:20 AM
to Al Viro, Cong Wang, Dmitry Vyukov, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
Al Viro <vi...@ZenIV.linux.org.uk> writes:

> On Thu, Dec 08, 2016 at 10:32:00PM -0800, Cong Wang wrote:
>
>> > Why do we do autobind there, anyway, and why is it conditional on
>> > SOCK_PASSCRED? Note that e.g. for SOCK_STREAM we can bloody well get
>> > to sending stuff without autobind ever done - just use socketpair()
>> > to create that sucker and we won't be going through the connect()
>> > at all.
>>
>> In the case Dmitry reported, unix_dgram_sendmsg() calls unix_autobind(),
>> not SOCK_STREAM.
>
> Yes, I've noticed. What I'm asking is what in there needs autobind triggered
> on sendmsg and why doesn't the same need affect the SOCK_STREAM case?

With respect to the conditionality on SOCK_PASSCRED, those are the Linux
semantics. Semantically, that is the way the code has behaved since
2.1.15, when support for passing credentials was added.
So I presume someone thought it was a good idea for a socket that is
sending credentials to another socket to have a name. It certainly
seems reasonable at first glance.

socketpair() is the only path that doesn't enforce this for SOCK_STREAM
with SOCK_PASSCRED; that is either an oversight or a don't-care,
because we already know who is at the other end.

I can imagine two possible fixes:
1) Declare that splice is nonsense in the presence of SOCK_PASSCRED.
2) Add a preparation operation that can be called on
af_unix sockets to ensure the autobind happens before
any problematic locks are taken.

Eric

Cong Wang

Jan 17, 2017, 4:22:10 PM
to Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
On Mon, Jan 16, 2017 at 1:32 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> On Fri, Dec 9, 2016 at 7:41 AM, Al Viro <vi...@zeniv.linux.org.uk> wrote:
>> On Thu, Dec 08, 2016 at 10:32:00PM -0800, Cong Wang wrote:
>>
>>> > Why do we do autobind there, anyway, and why is it conditional on
>>> > SOCK_PASSCRED? Note that e.g. for SOCK_STREAM we can bloody well get
>>> > to sending stuff without autobind ever done - just use socketpair()
>>> > to create that sucker and we won't be going through the connect()
>>> > at all.
>>>
>>> In the case Dmitry reported, unix_dgram_sendmsg() calls unix_autobind(),
>>> not SOCK_STREAM.
>>
>> Yes, I've noticed. What I'm asking is what in there needs autobind triggered
>> on sendmsg and why doesn't the same need affect the SOCK_STREAM case?
>>
>>> I guess some lock, perhaps the u->bindlock could be dropped before
>>> acquiring the next one (sb_writer), but I need to double check.
>>
>> Bad idea, IMO - do you *want* autobind being able to come through while
>> bind(2) is busy with mknod?
>
>
> Ping. This is still happening on HEAD.
>

Thanks for your reminder. Mind giving the attached patch (compile-tested
only) a try? I took another approach to fix this deadlock: it moves
unix_mknod() out of unix->bindlock. I am not sure whether this has any
unexpected impact.

Thanks.
unix.diff

Dmitry Vyukov

Jan 18, 2017, 4:18:15 AM
to Cong Wang, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
I instantly hit:

general protection fault: 0000 [#1] SMP KASAN
Dumping ftrace buffer:
(ftrace buffer empty)
Modules linked in:
CPU: 0 PID: 8930 Comm: syz-executor1 Not tainted 4.10.0-rc4+ #177
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff88003c908840 task.stack: ffff88003a9a0000
RIP: 0010:__lock_acquire+0xb3a/0x3430 kernel/locking/lockdep.c:3224
RSP: 0018:ffff88003a9a7218 EFLAGS: 00010006
RAX: dffffc0000000000 RBX: dffffc0000000000 RCX: 0000000000000000
RDX: 0000000000000003 RSI: 0000000000000000 RDI: 1ffff10007534e9d
RBP: ffff88003a9a7750 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000018 R11: 0000000000000000 R12: ffff88003c908840
R13: 0000000000000001 R14: ffffffff863504a0 R15: 0000000000000001
FS: 00007f4f8eb5d700(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020b1d000 CR3: 000000003bde9000 CR4: 00000000000006f0
Call Trace:
lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3753
__raw_spin_lock include/linux/spinlock_api_smp.h:144 [inline]
_raw_spin_lock+0x33/0x50 kernel/locking/spinlock.c:151
spin_lock include/linux/spinlock.h:302 [inline]
list_lru_add+0x10b/0x340 mm/list_lru.c:115
d_lru_add fs/dcache.c:366 [inline]
dentry_lru_add fs/dcache.c:421 [inline]
dput.part.27+0x659/0x7c0 fs/dcache.c:784
dput+0x1f/0x30 fs/dcache.c:753
path_put+0x31/0x70 fs/namei.c:500
unix_bind+0x424/0xea0 net/unix/af_unix.c:1072
SYSC_bind+0x20e/0x4a0 net/socket.c:1413
SyS_bind+0x24/0x30 net/socket.c:1399
entry_SYSCALL_64_fastpath+0x1f/0xc2
RIP: 0033:0x4454b9
RSP: 002b:00007f4f8eb5cb58 EFLAGS: 00000292 ORIG_RAX: 0000000000000031
RAX: ffffffffffffffda RBX: 000000000000001d RCX: 00000000004454b9
RDX: 0000000000000008 RSI: 000000002002cff8 RDI: 000000000000001d
RBP: 00000000006dd230 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000292 R12: 0000000000700000
R13: 00007f4f8f2de9d0 R14: 00007f4f8f2dfc40 R15: 0000000000000000
Code: e9 03 f3 48 ab 48 81 c4 10 05 00 00 44 89 e8 5b 41 5c 41 5d 41
5e 41 5f 5d c3 4c 89 d2 48 b8 00 00 00 00 00 fc ff df 48 c1 ea 03 <80>
3c 02 00 0f 85 9e 26 00 00 49 81 3a e0 be 6b 85 41 bf 00 00
RIP: __lock_acquire+0xb3a/0x3430 kernel/locking/lockdep.c:3224 RSP:
ffff88003a9a7218
---[ end trace 78951d69744a2fe1 ]---
Kernel panic - not syncing: Fatal exception
Dumping ftrace buffer:
(ftrace buffer empty)
Kernel Offset: disabled


and:


BUG: KASAN: use-after-free in list_lru_add+0x2fd/0x340
mm/list_lru.c:112 at addr ffff88006b301340
Read of size 8 by task syz-executor0/7116
CPU: 2 PID: 7116 Comm: syz-executor0 Not tainted 4.10.0-rc4+ #177
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:15 [inline]
dump_stack+0x2ee/0x3ef lib/dump_stack.c:51
kasan_object_err+0x1c/0x70 mm/kasan/report.c:165
print_address_description mm/kasan/report.c:203 [inline]
kasan_report_error mm/kasan/report.c:287 [inline]
kasan_report+0x1b6/0x460 mm/kasan/report.c:307
__asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:333
list_lru_add+0x2fd/0x340 mm/list_lru.c:112
d_lru_add fs/dcache.c:366 [inline]
dentry_lru_add fs/dcache.c:421 [inline]
dput.part.27+0x659/0x7c0 fs/dcache.c:784
dput+0x1f/0x30 fs/dcache.c:753
path_put+0x31/0x70 fs/namei.c:500
unix_bind+0x424/0xea0 net/unix/af_unix.c:1072
SYSC_bind+0x20e/0x4a0 net/socket.c:1413
SyS_bind+0x24/0x30 net/socket.c:1399
entry_SYSCALL_64_fastpath+0x1f/0xc2
RIP: 0033:0x4454b9
RSP: 002b:00007f1b034ebb58 EFLAGS: 00000292 ORIG_RAX: 0000000000000031
RAX: ffffffffffffffda RBX: 0000000000000016 RCX: 00000000004454b9
RDX: 0000000000000008 RSI: 000000002002eff8 RDI: 0000000000000016
RBP: 00000000006dd230 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000292 R12: 0000000000700000
R13: 00007f1b03c6d458 R14: 00007f1b03c6e5e8 R15: 0000000000000000
Object at ffff88006b301300, in cache vm_area_struct size: 192
Allocated:
PID = 1391

[<ffffffff812b2686>] save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57

[<ffffffff81a0e713>] save_stack+0x43/0xd0 mm/kasan/kasan.c:502

[<ffffffff81a0e9da>] set_track mm/kasan/kasan.c:514 [inline]
[<ffffffff81a0e9da>] kasan_kmalloc+0xaa/0xd0 mm/kasan/kasan.c:605

[<ffffffff81a0efd2>] kasan_slab_alloc+0x12/0x20 mm/kasan/kasan.c:544

[<ffffffff81a0a5e2>] kmem_cache_alloc+0x102/0x680 mm/slab.c:3563

[<ffffffff8144093b>] dup_mmap kernel/fork.c:609 [inline]
[<ffffffff8144093b>] dup_mm kernel/fork.c:1145 [inline]
[<ffffffff8144093b>] copy_mm kernel/fork.c:1199 [inline]
[<ffffffff8144093b>] copy_process.part.42+0x503b/0x5fd0 kernel/fork.c:1669

[<ffffffff81441e10>] copy_process kernel/fork.c:1494 [inline]
[<ffffffff81441e10>] _do_fork+0x200/0xff0 kernel/fork.c:1950

[<ffffffff81442cd7>] SYSC_clone kernel/fork.c:2060 [inline]
[<ffffffff81442cd7>] SyS_clone+0x37/0x50 kernel/fork.c:2054

[<ffffffff81009798>] do_syscall_64+0x2e8/0x930 arch/x86/entry/common.c:280

[<ffffffff841cadc9>] return_from_SYSCALL_64+0x0/0x7a
Freed:
PID = 5275

[<ffffffff812b2686>] save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57

[<ffffffff81a0e713>] save_stack+0x43/0xd0 mm/kasan/kasan.c:502

[<ffffffff81a0f04f>] set_track mm/kasan/kasan.c:514 [inline]
[<ffffffff81a0f04f>] kasan_slab_free+0x6f/0xb0 mm/kasan/kasan.c:578

[<ffffffff81a0c3b1>] __cache_free mm/slab.c:3505 [inline]
[<ffffffff81a0c3b1>] kmem_cache_free+0x71/0x240 mm/slab.c:3765

[<ffffffff81976992>] remove_vma+0x162/0x1b0 mm/mmap.c:175

[<ffffffff8197f72f>] exit_mmap+0x2ef/0x490 mm/mmap.c:2952

[<ffffffff814390bb>] __mmput kernel/fork.c:873 [inline]
[<ffffffff814390bb>] mmput+0x22b/0x6e0 kernel/fork.c:895

[<ffffffff81453a3f>] exit_mm kernel/exit.c:521 [inline]
[<ffffffff81453a3f>] do_exit+0x9cf/0x28a0 kernel/exit.c:826

[<ffffffff8145a369>] do_group_exit+0x149/0x420 kernel/exit.c:943

[<ffffffff81489630>] get_signal+0x7e0/0x1820 kernel/signal.c:2313

[<ffffffff8127ca92>] do_signal+0xd2/0x2190 arch/x86/kernel/signal.c:807

[<ffffffff81007900>] exit_to_usermode_loop+0x200/0x2a0
arch/x86/entry/common.c:156

[<ffffffff81009413>] prepare_exit_to_usermode
arch/x86/entry/common.c:190 [inline]
[<ffffffff81009413>] syscall_return_slowpath+0x4d3/0x570
arch/x86/entry/common.c:259

[<ffffffff841cada2>] entry_SYSCALL_64_fastpath+0xc0/0xc2
Memory state around the buggy address:
ffff88006b301200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88006b301280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
>ffff88006b301300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff88006b301380: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
ffff88006b301400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================

Cong Wang

Jan 19, 2017, 11:57:57 PM
to Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
On Wed, Jan 18, 2017 at 1:17 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> On Tue, Jan 17, 2017 at 10:21 PM, Cong Wang <xiyou.w...@gmail.com> wrote:
>> On Mon, Jan 16, 2017 at 1:32 AM, Dmitry Vyukov <dvy...@google.com> wrote:
>>> On Fri, Dec 9, 2016 at 7:41 AM, Al Viro <vi...@zeniv.linux.org.uk> wrote:
>>>> On Thu, Dec 08, 2016 at 10:32:00PM -0800, Cong Wang wrote:
>>>>
>>>>> > Why do we do autobind there, anyway, and why is it conditional on
>>>>> > SOCK_PASSCRED? Note that e.g. for SOCK_STREAM we can bloody well get
>>>>> > to sending stuff without autobind ever done - just use socketpair()
>>>>> > to create that sucker and we won't be going through the connect()
>>>>> > at all.
>>>>>
>>>>> In the case Dmitry reported, unix_dgram_sendmsg() calls unix_autobind(),
>>>>> not SOCK_STREAM.
>>>>
>>>> Yes, I've noticed. What I'm asking is what in there needs autobind triggered
>>>> on sendmsg and why doesn't the same need affect the SOCK_STREAM case?
>>>>
>>>>> I guess some lock, perhaps the u->bindlock could be dropped before
>>>>> acquiring the next one (sb_writer), but I need to double check.
>>>>
>>>> Bad idea, IMO - do you *want* autobind being able to come through while
>>>> bind(2) is busy with mknod?
>>>
>>>
>>> Ping. This is still happening on HEAD.
>>>
>>
>> Thanks for your reminder. Mind to give the attached patch (compile only)
>> a try? I take another approach to fix this deadlock, which moves the
>> unix_mknod() out of unix->bindlock. Not sure if there is any unexpected
>> impact with this way.
>
>
> I instantly hit:
>

Oh, sorry about it, I forgot to initialize struct path...

Attached is the updated version, I just did a boot test, no crash at least. ;)

Thanks!
unix.diff

Dmitry Vyukov

Jan 20, 2017, 5:52:38 PM
to Cong Wang, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
On Fri, Jan 20, 2017 at 5:57 AM, Cong Wang <xiyou.w...@gmail.com> wrote:
>>>>>> > Why do we do autobind there, anyway, and why is it conditional on
>>>>>> > SOCK_PASSCRED? Note that e.g. for SOCK_STREAM we can bloody well get
>>>>>> > to sending stuff without autobind ever done - just use socketpair()
>>>>>> > to create that sucker and we won't be going through the connect()
>>>>>> > at all.
>>>>>>
>>>>>> In the case Dmitry reported, unix_dgram_sendmsg() calls unix_autobind(),
>>>>>> not SOCK_STREAM.
>>>>>
>>>>> Yes, I've noticed. What I'm asking is what in there needs autobind triggered
>>>>> on sendmsg and why doesn't the same need affect the SOCK_STREAM case?
>>>>>
>>>>>> I guess some lock, perhaps the u->bindlock could be dropped before
>>>>>> acquiring the next one (sb_writer), but I need to double check.
>>>>>
>>>>> Bad idea, IMO - do you *want* autobind being able to come through while
>>>>> bind(2) is busy with mknod?
>>>>
>>>>
>>>> Ping. This is still happening on HEAD.
>>>>
>>>
>>> Thanks for your reminder. Mind to give the attached patch (compile only)
>>> a try? I take another approach to fix this deadlock, which moves the
>>> unix_mknod() out of unix->bindlock. Not sure if there is any unexpected
>>> impact with this way.
>>
>>
>> I instantly hit:
>>
>
> Oh, sorry about it, I forgot to initialize struct path...
>
> Attached is the updated version, I just did a boot test, no crash at least. ;)
>
> Thanks!

This works! I did not see the deadlock warning, nor any other related crashes.

Tested-by: Dmitry Vyukov <dvy...@google.com>

Cong Wang

Jan 23, 2017, 2:00:38 PM
to Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
On Fri, Jan 20, 2017 at 2:52 PM, Dmitry Vyukov <dvy...@google.com> wrote:
>
> This works! I did not see the deadlock warning, nor any other related crashes.
>
> Tested-by: Dmitry Vyukov <dvy...@google.com>

Thanks for verifying it. I will send it out formally soon.

Mateusz Guzik

Jan 26, 2017, 6:29:27 PM
to Cong Wang, Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
I don't think this is the right approach.

Currently the file creation is postponed until unix_bind can no longer
fail for any other reason. With it reordered, someone may race you with a
different path, and now you are left with a file to clean up. And it
is quite unclear to me whether you can unlink it.

I don't have a good idea how to fix it. A somewhat typical approach
would introduce an intermediate state ("under construction") and drop
the lock around the call into unix_mknod.

In this particular case, perhaps you could repurpose gc_flags as a
general flags carrier and add a 'binding in progress' flag to test.

Cong Wang

Jan 27, 2017, 12:11:28 AM
to Mateusz Guzik, Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
What races do you mean here? If you mean someone could take a
reference on that file, that can happen whether we hold bindlock or not,
since the file is visible once created. The filesystem layer takes care of
the file refcount, so all we need to do here is call path_put(), as in my
patch. Or, if you mean two threads calling unix_bind() could race without
bindlock: only one of them can succeed; the other one just fails out.

Mateusz Guzik

Jan 27, 2017, 1:41:50 AM
to Cong Wang, Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
Two threads can race and one fails with EINVAL.

With your patch there is a new file created, and it is unclear what to
do with it: leaving it in place sounds like a last resort, and
unlinking it sounds extremely fishy, as it opens you up to games played by
the user.

Cong Wang

Jan 31, 2017, 1:44:24 AM
to Mateusz Guzik, Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
But the file is created and visible to users even without my patch,
and the file is also put when the unix sock is released. So the only
difference my patch makes is that bindlock is no longer taken during file
creation, which does not seem to be the cause of the problem you are
complaining about here.

Mind being more specific?

Mateusz Guzik

Jan 31, 2017, 1:14:18 PM
to Cong Wang, Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
Consider two threads which bind the same socket, but with different paths.

Currently exactly one file gets created: the one used to bind.

With your patch both threads can succeed in creating their respective
files, but only one will manage to bind. The other one must error out,
but it has already created a file, and it is unclear what to do with it.

Cong Wang

Feb 6, 2017, 2:22:33 AM
to Mateusz Guzik, Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
In this case, it simply puts the path back:

err = -EINVAL;
if (u->addr)
goto out_up;
[...]

out_up:
mutex_unlock(&u->bindlock);
out_put:
if (err)
path_put(&path);
out:
return err;


Which is what unix_release_sock() does too:

if (path.dentry)
path_put(&path);

Mateusz Guzik

Feb 7, 2017, 9:20:43 AM
to Cong Wang, Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
Yes, but unix_release_sock is expected to leave the file behind.
Note I'm not claiming there is a leak, but that racing threads will be
able to trigger a condition where you create a file and fail to bind it.

What to do with the file now?

Untested, but a likely working solution would rework the code so that
e.g. a flag is set and the lock can be dropped.

Cong Wang

Feb 9, 2017, 8:38:17 PM
to Mateusz Guzik, Dmitry Vyukov, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, netdev, Eric Dumazet, syzkaller
On Tue, Feb 7, 2017 at 6:20 AM, Mateusz Guzik <mgu...@redhat.com> wrote:
>
> Yes, but unix_release_sock is expected to leave the file behind.
> Note I'm not claiming there is a leak, but that racing threads will be
> able to trigger a condition where you create a file and fail to bind it.
>

Which is expected, right? No one guarantees that successful file
creation means a successful bind; the previous code happened to, but that
is not part of the API AFAIK. Should a sane user-space application check
for file creation to determine whether bind() succeeded, or just check
bind()'s return value?

> What to do with the file now?
>

We just do what unix_release_sock() does, so why do you keep
asking the same question?

If you are still concerned about the race with user space, think about the
same race between a successful bind() and close(); nothing here is new.

kodamagul...@gmail.com

Jun 22, 2017, 1:49:05 PM
to syzkaller, vi...@zeniv.linux.org.uk, linux-...@vger.kernel.org, linux-...@vger.kernel.org, da...@davemloft.net, rwei...@mobileactivedefense.com, han...@stressinduktion.org, xiyou.w...@gmail.com, net...@vger.kernel.org, edum...@google.com
I was getting the crash below while running mp4.


44.531367] [<8001016c>] (handle_IRQ) from [<800085d4>] (gic_handle_irq+0x3c/0x6c)
[  844.539924]  1367]  r6:806cbf28 r5:806d3354 r4:fa21200c r3:000000c0
[   0] [  844.546514] [<80008598>] (gic_handle_irq) from [<804ae604>] (__irq_svc+0x44/0x78)
[   0] [  844.554972] Exception stack(0x806cbf28 to 0x806cbf70)
[   0] [  844.560893] bf20:                   00000000 00000000 0045dcd6 00000000 806ca000 806ca000
[   0] [  844.570068] bf40: 00000001 806d24f4 804b9268 8071c80c 806ca000 806cbf7c 806cbf70 806cbf70
[   0] [  844.579258] bf60: 800105d8 800105dc 600f0013 ffffffff
[  844.585172]  9258]  r7:806cbf5c r6:ffffffff r5:600f0013 r4:800105dc
[   0] [  844.591757] [<800105a8>] (arch_cpu_idle) from [<8008e024>] (cpu_startup_entry+0x208/0x268)
[   0] [  844.601041] [<8008de1c>] (cpu_startup_entry) from [<804a6580>] (rest_init+0x94/0x98)
[  844.609770]  1041]  r7:ffffffff
[   0] [  844.613066] [<804a64ec>] (rest_init) from [<8066fb3c>] (start_kernel+0x308/0x314)
[  844.621527]  3066]  r4:806d2fb0 r3:00000000
[   0] [  844.625918] [<8066f834>] (start_kernel) from [<80008074>] (0x80008074)
[   1] [  844.633380] CPU1: stopping
[   1] [  844.636857] CPU: 1 PID: 1815 Comm: amfmTimerMgr Tainted: G           O 3.14.30 #Rel_Elina_J6_Sample-B0_17123A
[  844.647859]  6857] Backtrace: 
[   1] [  844.651071] [<80013874>] (dump_backtrace) from [<80013ae0>] (show_stack+0x20/0x24)
[  844.659615]  1071]  r6:00000001 r5:806fd89c r4:00000000 r3:00000000
[   1] [  844.666203] [<80013ac0>] (show_stack) from [<804a8e68>] (dump_stack+0x84/0xc4)
[   1] [  844.674399] [<804a8de4>] (dump_stack) from [<80015640>] (handle_IPI+0x158/0x16c)
[  844.682765]  4399]  r5:00000005 r4:806c800c
[   1] [  844.687155] [<800154e8>] (handle_IPI) from [<80008600>] (gic_handle_irq+0x68/0x6c)
[  844.695700]  7155]  r8:8e03b600 r7:fa212000 r6:8b37fc00 r5:806d3354 r4:fa21200c r3:00000000
[   1] [  844.704469] [<80008598>] (gic_handle_irq) from [<804ae604>] (__irq_svc+0x44/0x78)
[   1] [  844.712922] Exception stack(0x8b37fc00 to 0x8b37fc48)
[   1] [  844.718837] fc00: 00000017 8e1bec00 00000056 00000000 000004d0 8d524540 000004d0 400f0013
[   1] [  844.728022] fc20: 8e03b600 803f0810 8b2b74c0 8b37fc7c 8b37fc48 8b37fc48 8013bd74 8013bd78
[   1] [  844.737199] fc40: 600f0013 ffffffff
[  844.741481]  7199]  r7:8b37fc34 r6:ffffffff r5:600f0013 r4:8013bd78
[   1] [  844.748081] [<8013bc74>] (kmem_cache_alloc) from [<803f0810>] (__alloc_skb+0x44/0x13c)
[  844.756987]  8081]  r10:8b2b74c0 r9:00000000 r8:8e03b600 r7:00000000 r6:000004d0 r5:00000040
[  844.765843]  8081]  r4:00000000
[   1] [  844.769134] [<803f07cc>] (__alloc_skb) from [<803ea214>] (sock_alloc_send_pskb+0xa8/0x3bc)
[  844.778405]  9134]  r9:00000000 r8:00000040 r7:80080698 r6:00000000 r5:8b37e000 r4:00000000
[   1] [  844.787183] [<803ea16c>] (sock_alloc_send_pskb) from [<804974d0>] (unix_dgram_sendmsg+0x150/0x5ac)
[  844.797181]  7183]  r10:00000000 r9:00000040 r8:8b2b74c0 r7:8d192040 r6:8b37fdd8 r5:8b37ff5c
[  844.806039]  7183]  r4:8b34b040
[   1] [  844.809334] [<80497380>] (unix_dgram_sendmsg) from [<80497978>] (unix_seqpacket_sendmsg+0x4c/0x88)
[  844.819337]  9334]  r10:000040c0 r9:00000000 r8:8d192040 r7:8b37fe60 r6:8b37fe60 r5:00000040
[  844.828192]  9334]  r4:8b37fda0
[   1] [  844.831482] [<8049792c>] (unix_seqpacket_sendmsg) from [<803e63b0>] (sock_sendmsg+0x98/0xbc)
[  844.840934]  1482]  r6:8b37fe60 r5:00000040 r4:8049792c r3:00000040
[   1] [  844.847520] [<803e6318>] (sock_sendmsg) from [<803e7a40>] (___sys_sendmsg.part.29+0x2d0/0x2e0)
[  844.857149]  7520]  r4:8b37ff5c
[   1] [  844.860442] [<803e7770>] (___sys_sendmsg.part.29) from [<803e8bd0>] (__sys_sendmsg+0x5c/0x8c)
[  844.869988]  0442]  r10:00000000 r9:8b37e000 r8:8000fa24 r7:00000128 r6:000040c0 r5:74ed7bc4
[  844.878851]  0442]  r4:8d192040
[   1] [  844.882141] [<803e8b74>] (__sys_sendmsg) from [<803e8c18>] (SyS_sendmsg+0x18/0x1c)
[  844.890681]  2141]  r6:00000010 r5:00000000 r4:0000003d
[   1] [  844.896166] [<803e8c00>] (SyS_sendmsg) from [<8000f7a0>] (ret_fast_syscall+0x0/0x48)
[  844.913803]  6166] Rebooting in 1 seconds..

Cong Wang

Jun 23, 2017, 12:31:18 PM
to kodamagul...@gmail.com, syzkaller, Al Viro, linux-...@vger.kernel.org, LKML, David Miller, Rainer Weikusat, Hannes Frederic Sowa, Linux Kernel Network Developers, Eric Dumazet
Hi,

On Thu, Jun 22, 2017 at 10:49 AM, <kodamagul...@gmail.com> wrote:
> I was getting below crash while running mp4.

Are you sure your 3.14 kernel has the patch from this thread?
Commit 0fb44559ffd67de8517098 was merged in 4.10.

Also, your crash is on the unix_dgram_sendmsg() path, not in
unix_bind().