sound: deadlock between snd_rawmidi_kernel_open/snd_seq_port_connect

Dmitry Vyukov

Jan 24, 2016, 4:44:54 AM
to Jaroslav Kysela, Takashi Iwai, Jie Yang, Mark Brown, alsa-...@alsa-project.org, LKML, Alexander Potapenko, Sasha Levin, Kostya Serebryany, syzkaller
Hello,

While running the syzkaller fuzzer I got the following lockdep report:

======================================================
[ INFO: possible circular locking dependency detected ]
4.4.0+ #276 Not tainted
-------------------------------------------------------
syz-executor/21025 is trying to acquire lock:
(register_mutex#5){+.+.+.}, at: [<ffffffff84f889cb>]
snd_rawmidi_kernel_open+0x4b/0x260 sound/core/rawmidi.c:341

but task is already holding lock:
(&grp->list_mutex/1){+.+...}, at: [<ffffffff84fc4afa>]
snd_seq_port_connect+0x1ba/0x840 sound/core/seq/seq_ports.c:506

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&grp->list_mutex/1){+.+...}:
[<ffffffff8145ad8c>] lock_acquire+0x1dc/0x430
kernel/locking/lockdep.c:3585
[<ffffffff8144851a>] down_write_nested+0x4a/0xa0
kernel/locking/rwsem.c:149
[<ffffffff84fc4afa>] snd_seq_port_connect+0x1ba/0x840
sound/core/seq/seq_ports.c:506
[<ffffffff84fb6f54>] snd_seq_ioctl_subscribe_port+0x1c4/0x290
sound/core/seq/seq_clientmgr.c:1464
[<ffffffff84fb126d>] snd_seq_do_ioctl+0x19d/0x1c0
sound/core/seq/seq_clientmgr.c:2209
[<ffffffff84fb2c2b>] snd_seq_kernel_client_ctl+0xdb/0x170
sound/core/seq/seq_clientmgr.c:2423
[<ffffffff88413c20>] snd_seq_oss_create_client+0x253/0x2d5
sound/core/seq/oss/seq_oss_init.c:119
[<ffffffff8841393e>] alsa_seq_oss_init+0x1af/0x23e
sound/core/seq/oss/seq_oss.c:89
[<ffffffff81002259>] do_one_initcall+0x159/0x380 init/main.c:794
[< inline >] do_initcall_level init/main.c:859
[< inline >] do_initcalls init/main.c:867
[< inline >] do_basic_setup init/main.c:885
[<ffffffff88315c1a>] kernel_init_freeable+0x474/0x52d init/main.c:1010
[<ffffffff86312683>] kernel_init+0x13/0x150 init/main.c:936
[<ffffffff86336fef>] ret_from_fork+0x3f/0x70
arch/x86/entry/entry_64.S:468

-> #1 (&grp->list_mutex){++++.+}:
[<ffffffff8145ad8c>] lock_acquire+0x1dc/0x430
kernel/locking/lockdep.c:3585
[<ffffffff863323a7>] down_read+0x47/0x60 kernel/locking/rwsem.c:22
[< inline >] deliver_to_subscribers
sound/core/seq/seq_clientmgr.c:679
[<ffffffff84fb5509>] snd_seq_deliver_event+0x5a9/0x800
sound/core/seq/seq_clientmgr.c:817
[<ffffffff84fb6466>] snd_seq_kernel_client_dispatch+0x126/0x170
sound/core/seq/seq_clientmgr.c:2401
[<ffffffff84fc1e62>] snd_seq_system_broadcast+0xb2/0xf0
sound/core/seq/seq_system.c:101
[<ffffffff84fb248e>] snd_seq_create_kernel_client+0x21e/0x300
sound/core/seq/seq_clientmgr.c:2280
[< inline >] snd_virmidi_dev_attach_seq
sound/core/seq/seq_virmidi.c:372
[<ffffffff84fdc55f>] snd_virmidi_dev_register+0x29f/0x750
sound/core/seq/seq_virmidi.c:439
[<ffffffff84f824ec>] snd_rawmidi_dev_register+0x30c/0xd40
sound/core/rawmidi.c:1589
[<ffffffff84f33583>] __snd_device_register.part.0+0x63/0xc0
sound/core/device.c:164
[< inline >] __snd_device_register sound/core/device.c:162
[<ffffffff84f341ed>] snd_device_register_all+0xad/0x110
sound/core/device.c:212
[<ffffffff84f277cf>] snd_card_register+0xef/0x6a0 sound/core/init.c:749
[<ffffffff84fefc0f>] snd_virmidi_probe+0x3ef/0x590
sound/drivers/virmidi.c:123
[<ffffffff832feb6c>] platform_drv_probe+0x8c/0x160
drivers/base/platform.c:562
[< inline >] really_probe drivers/base/dd.c:377
[<ffffffff832f862e>] driver_probe_device+0x37e/0xc90
drivers/base/dd.c:499
[<ffffffff832f926e>] __device_attach_driver+0x19e/0x250
drivers/base/dd.c:584
[<ffffffff832f23df>] bus_for_each_drv+0x13f/0x1d0 drivers/base/bus.c:464
[<ffffffff832f809f>] __device_attach+0x1ef/0x2e0 drivers/base/dd.c:641
[<ffffffff832f93ba>] device_initial_probe+0x1a/0x20 drivers/base/dd.c:688
[<ffffffff832f5939>] bus_probe_device+0x1e9/0x290 drivers/base/bus.c:558
[<ffffffff832ef11b>] device_add+0x84b/0x1490 drivers/base/core.c:1120
[<ffffffff832fe2c9>] platform_device_add+0x389/0x790
drivers/base/platform.c:403
[<ffffffff832fff26>] platform_device_register_full+0x396/0x4c0
drivers/base/platform.c:535
[< inline >] platform_device_register_resndata
include/linux/platform_device.h:111
[< inline >] platform_device_register_simple
include/linux/platform_device.h:140
[<ffffffff884146d1>] alsa_card_virmidi_init+0x104/0x1da
sound/drivers/virmidi.c:172
[<ffffffff81002259>] do_one_initcall+0x159/0x380 init/main.c:794
[< inline >] do_initcall_level init/main.c:859
[< inline >] do_initcalls init/main.c:867
[< inline >] do_basic_setup init/main.c:885
[<ffffffff88315c1a>] kernel_init_freeable+0x474/0x52d init/main.c:1010
[<ffffffff86312683>] kernel_init+0x13/0x150 init/main.c:936
[<ffffffff86336fef>] ret_from_fork+0x3f/0x70
arch/x86/entry/entry_64.S:468

-> #0 (register_mutex#5){+.+.+.}:
[< inline >] check_prev_add kernel/locking/lockdep.c:1853
[< inline >] check_prevs_add kernel/locking/lockdep.c:1958
[< inline >] validate_chain kernel/locking/lockdep.c:2144
[<ffffffff8145742b>] __lock_acquire+0x31eb/0x4700
kernel/locking/lockdep.c:3206
[<ffffffff8145ad8c>] lock_acquire+0x1dc/0x430
kernel/locking/lockdep.c:3585
[< inline >] __mutex_lock_common kernel/locking/mutex.c:518
[<ffffffff8632cf61>] mutex_lock_nested+0xb1/0xa50
kernel/locking/mutex.c:618
[<ffffffff84f889cb>] snd_rawmidi_kernel_open+0x4b/0x260
sound/core/rawmidi.c:341
[<ffffffff84fdda77>] midisynth_subscribe+0xf7/0x340
sound/core/seq/seq_midi.c:188
[<ffffffff84fc2a4e>] subscribe_port.isra.2+0x14e/0x2b0
sound/core/seq/seq_ports.c:426
[<ffffffff84fc4dd0>] snd_seq_port_connect+0x490/0x840
sound/core/seq/seq_ports.c:527
[<ffffffff84fb6f54>] snd_seq_ioctl_subscribe_port+0x1c4/0x290
sound/core/seq/seq_clientmgr.c:1464
[<ffffffff84fb126d>] snd_seq_do_ioctl+0x19d/0x1c0
sound/core/seq/seq_clientmgr.c:2209
[<ffffffff84fb2c2b>] snd_seq_kernel_client_ctl+0xdb/0x170
sound/core/seq/seq_clientmgr.c:2423
[<ffffffff84fd69e4>] snd_seq_oss_midi_open+0x3b4/0x610
sound/core/seq/oss/seq_oss_midi.c:375
[<ffffffff84fd6ccb>] snd_seq_oss_midi_open_all+0x8b/0xd0
sound/core/seq/oss/seq_oss_midi.c:306
[<ffffffff84fca1d5>] snd_seq_oss_open+0x5c5/0x8d0
sound/core/seq/oss/seq_oss_init.c:276
[<ffffffff84fc903a>] odev_open+0x6a/0x90 sound/core/seq/oss/seq_oss.c:138
[<ffffffff84f2279f>] soundcore_open+0x30f/0x640 sound/sound_core.c:639
[<ffffffff817be1fa>] chrdev_open+0x22a/0x4c0 fs/char_dev.c:388
[<ffffffff817a9c02>] do_dentry_open+0x6a2/0xcb0 fs/open.c:736
[<ffffffff817ad2db>] vfs_open+0x17b/0x1f0 fs/open.c:853
[< inline >] do_last fs/namei.c:3254
[<ffffffff817e00d9>] path_openat+0xde9/0x5e30 fs/namei.c:3386
[<ffffffff817e895e>] do_filp_open+0x18e/0x250 fs/namei.c:3421
[<ffffffff817ada5c>] do_sys_open+0x1fc/0x420 fs/open.c:1022
[< inline >] SYSC_open fs/open.c:1040
[<ffffffff817adcad>] SyS_open+0x2d/0x40 fs/open.c:1035
[<ffffffff86336c36>] entry_SYSCALL_64_fastpath+0x16/0x7a
arch/x86/entry/entry_64.S:185

other info that might help us debug this:

Chain exists of:
register_mutex#5 --> &grp->list_mutex --> &grp->list_mutex/1

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&grp->list_mutex/1);
                               lock(&grp->list_mutex);
                               lock(&grp->list_mutex/1);
  lock(register_mutex#5);

*** DEADLOCK ***

3 locks held by syz-executor/21025:
#0: (register_mutex#4){+.+.+.}, at: [<ffffffff84fc902f>]
odev_open+0x5f/0x90 sound/core/seq/oss/seq_oss.c:137
#1: (&grp->list_mutex){++++.+}, at: [<ffffffff84fc4ae2>]
snd_seq_port_connect+0x1a2/0x840 sound/core/seq/seq_ports.c:505
#2: (&grp->list_mutex/1){+.+...}, at: [<ffffffff84fc4afa>]
snd_seq_port_connect+0x1ba/0x840 sound/core/seq/seq_ports.c:506

stack backtrace:
CPU: 2 PID: 21025 Comm: syz-executor Not tainted 4.4.0+ #276
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
00000000ffffffff ffff88006d0af0c0 ffffffff82999e2d ffffffff88fa1eb0
ffffffff88fa2f90 ffffffff88fa2060 ffff88006d0af110 ffffffff81450658
ffff880065c8df00 ffff880065c8e79a 0000000000000000 ffff880065c8e778
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff82999e2d>] dump_stack+0x6f/0xa2 lib/dump_stack.c:50
[<ffffffff81450658>] print_circular_bug+0x288/0x340
kernel/locking/lockdep.c:1226
[< inline >] check_prev_add kernel/locking/lockdep.c:1853
[< inline >] check_prevs_add kernel/locking/lockdep.c:1958
[< inline >] validate_chain kernel/locking/lockdep.c:2144
[<ffffffff8145742b>] __lock_acquire+0x31eb/0x4700 kernel/locking/lockdep.c:3206
[<ffffffff8145ad8c>] lock_acquire+0x1dc/0x430 kernel/locking/lockdep.c:3585
[< inline >] __mutex_lock_common kernel/locking/mutex.c:518
[<ffffffff8632cf61>] mutex_lock_nested+0xb1/0xa50 kernel/locking/mutex.c:618
[<ffffffff84f889cb>] snd_rawmidi_kernel_open+0x4b/0x260
sound/core/rawmidi.c:341
[<ffffffff84fdda77>] midisynth_subscribe+0xf7/0x340
sound/core/seq/seq_midi.c:188
[<ffffffff84fc2a4e>] subscribe_port.isra.2+0x14e/0x2b0
sound/core/seq/seq_ports.c:426
[<ffffffff84fc4dd0>] snd_seq_port_connect+0x490/0x840
sound/core/seq/seq_ports.c:527
[<ffffffff84fb6f54>] snd_seq_ioctl_subscribe_port+0x1c4/0x290
sound/core/seq/seq_clientmgr.c:1464
[<ffffffff84fb126d>] snd_seq_do_ioctl+0x19d/0x1c0
sound/core/seq/seq_clientmgr.c:2209
[<ffffffff84fb2c2b>] snd_seq_kernel_client_ctl+0xdb/0x170
sound/core/seq/seq_clientmgr.c:2423
[<ffffffff84fd69e4>] snd_seq_oss_midi_open+0x3b4/0x610
sound/core/seq/oss/seq_oss_midi.c:375
[<ffffffff84fd6ccb>] snd_seq_oss_midi_open_all+0x8b/0xd0
sound/core/seq/oss/seq_oss_midi.c:306
[<ffffffff84fca1d5>] snd_seq_oss_open+0x5c5/0x8d0
sound/core/seq/oss/seq_oss_init.c:276
[<ffffffff84fc903a>] odev_open+0x6a/0x90 sound/core/seq/oss/seq_oss.c:138
[<ffffffff84f2279f>] soundcore_open+0x30f/0x640 sound/sound_core.c:639
[<ffffffff817be1fa>] chrdev_open+0x22a/0x4c0 fs/char_dev.c:388
[<ffffffff817a9c02>] do_dentry_open+0x6a2/0xcb0 fs/open.c:736
[<ffffffff817ad2db>] vfs_open+0x17b/0x1f0 fs/open.c:853
[< inline >] do_last fs/namei.c:3254
[<ffffffff817e00d9>] path_openat+0xde9/0x5e30 fs/namei.c:3386
[<ffffffff817e895e>] do_filp_open+0x18e/0x250 fs/namei.c:3421
[<ffffffff817ada5c>] do_sys_open+0x1fc/0x420 fs/open.c:1022
[< inline >] SYSC_open fs/open.c:1040
[<ffffffff817adcad>] SyS_open+0x2d/0x40 fs/open.c:1035
[<ffffffff86336c36>] entry_SYSCALL_64_fastpath+0x16/0x7a
arch/x86/entry/entry_64.S:185

On commit 30f05309bde49295e02e45c7e615f73aa4e0ccc2.
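
In other words, the chains above show two code paths taking the same pair of
locks in opposite order: card registration takes the rawmidi register_mutex and
then a port's grp->list_mutex (chain #1), while the port-connect path takes
grp->list_mutex and then register_mutex via snd_rawmidi_kernel_open (chain #0).
Below is a toy userspace analogue of that shape; pthread mutexes stand in for
the kernel locks, and this is not the actual ALSA code, only an illustration of
the ordering cycle:

/* Toy analogue of the two lock orders in the report above (illustrative,
 * not kernel code). */
#include <pthread.h>

static pthread_mutex_t register_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t list_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Card registration path (chain #1): register_mutex -> list_mutex */
static void *register_path(void *arg)
{
	pthread_mutex_lock(&register_mutex);	/* snd_rawmidi_dev_register() */
	pthread_mutex_lock(&list_mutex);	/* deliver_to_subscribers() */
	pthread_mutex_unlock(&list_mutex);
	pthread_mutex_unlock(&register_mutex);
	return NULL;
}

/* Port connect path (chain #0): list_mutex -> register_mutex */
static void *connect_path(void *arg)
{
	pthread_mutex_lock(&list_mutex);	/* snd_seq_port_connect() */
	pthread_mutex_lock(&register_mutex);	/* snd_rawmidi_kernel_open() */
	pthread_mutex_unlock(&register_mutex);
	pthread_mutex_unlock(&list_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, register_path, NULL);
	pthread_create(&b, NULL, connect_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}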

Takashi Iwai

Jan 25, 2016, 5:47:07 AM
to Dmitry Vyukov, alsa-...@alsa-project.org, Jie Yang, Mark Brown, Jaroslav Kysela, LKML, Alexander Potapenko, Kostya Serebryany, syzkaller, Sasha Levin
This looks like a false-positive report to me. Of course, we should
annotate the mutex there for nested locks, though.
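
For the record, the kind of annotation meant here is lockdep's nesting
subclass, which seq_ports.c already uses for the second list_mutex. A minimal
sketch of what such annotations look like, with stand-in lock names and not a
proposed fix for this particular report:

/* Illustrative lockdep nesting annotations; stand-in locks only. */
#include <linux/mutex.h>
#include <linux/rwsem.h>
#include <linux/lockdep.h>

static DEFINE_MUTEX(example_register_mutex);
static DECLARE_RWSEM(example_src_list_mutex);
static DECLARE_RWSEM(example_dest_list_mutex);

static void example_annotated_locking(void)
{
	/* Two locks of the same class: mark the second as nested so
	 * lockdep does not flag it as a recursive self-deadlock. */
	down_write(&example_src_list_mutex);
	down_write_nested(&example_dest_list_mutex, SINGLE_DEPTH_NESTING);

	/* A plain mutex can likewise be taken with an explicit subclass. */
	mutex_lock_nested(&example_register_mutex, SINGLE_DEPTH_NESTING);
	mutex_unlock(&example_register_mutex);

	up_write(&example_dest_list_mutex);
	up_write(&example_src_list_mutex);
}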


thanks,

Takashi

Dmitry Vyukov

Feb 2, 2016, 4:24:15 PM
to Takashi Iwai, alsa-...@alsa-project.org, Jie Yang, Mark Brown, Jaroslav Kysela, LKML, Alexander Potapenko, Kostya Serebryany, syzkaller, Sasha Levin
Takashi, can you please annotate it for lockdep? I hit it on every run.
Thanks in advance.

Takashi Iwai

Feb 3, 2016, 2:47:52 AM
to Dmitry Vyukov, alsa-...@alsa-project.org, Jie Yang, Mark Brown, Jaroslav Kysela, LKML, Alexander Potapenko, Kostya Serebryany, syzkaller, Sasha Levin
The lock had an annotation, but alas it didn't seem enough.
In any case, it's not good to have double locks if it's avoidable. So I
worked on it, and below is the current result of the hack.

The change became a bit more intrusive than I wished, but it should
still be simple enough. I put this on top of the topic/core-fixes branch.


thanks,

Takashi

-- 8< --
From: Takashi Iwai <ti...@suse.de>
Subject: [PATCH] ALSA: seq: Fix lockdep warnings due to double mutex locks

The port subscription code takes double mutex locks for the source and
destination ports, and this may become racy once wrongly set up.
It leads to a lockdep warning splat, typically triggered by a fuzzer
like syzkaller, although the actual deadlock hasn't been seen so far.

This patch simplifies the handling by reducing it to two single locks,
so that no lockdep warning will be triggered any longer.

By splitting the operation into two actions, a still-in-progress
element may be present in one list while the other is being handled.
To ignore such an element, a new check is added in
deliver_to_subscribers().

Along with it, the code to add/remove the subscribers list element was
cleaned up and refactored.

Cc: <sta...@vger.kernel.org>
Signed-off-by: Takashi Iwai <ti...@suse.de>
---
sound/core/seq/seq_clientmgr.c | 3 +
sound/core/seq/seq_ports.c | 233 +++++++++++++++++++++++------------------
2 files changed, 133 insertions(+), 103 deletions(-)

diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
index 13cfa815732d..58e79e02f217 100644
--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -678,6 +678,9 @@ static int deliver_to_subscribers(struct snd_seq_client *client,
else
down_read(&grp->list_mutex);
list_for_each_entry(subs, &grp->list_head, src_list) {
+ /* both ports ready? */
+ if (atomic_read(&subs->ref_count) != 2)
+ continue;
event->dest = subs->info.dest;
if (subs->info.flags & SNDRV_SEQ_PORT_SUBS_TIMESTAMP)
/* convert time according to flag with subscription */
diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c
index 55170a20ae72..921fb2bd8fad 100644
--- a/sound/core/seq/seq_ports.c
+++ b/sound/core/seq/seq_ports.c
@@ -173,10 +173,6 @@ struct snd_seq_client_port *snd_seq_create_port(struct snd_seq_client *client,
}

/* */
-enum group_type {
- SRC_LIST, DEST_LIST
-};
-
static int subscribe_port(struct snd_seq_client *client,
struct snd_seq_client_port *port,
struct snd_seq_port_subs_info *grp,
@@ -203,6 +199,20 @@ static struct snd_seq_client_port *get_client_port(struct snd_seq_addr *addr,
return NULL;
}

+static void delete_and_unsubscribe_port(struct snd_seq_client *client,
+ struct snd_seq_client_port *port,
+ struct snd_seq_subscribers *subs,
+ bool is_src, bool ack);
+
+static inline struct snd_seq_subscribers *
+get_subscriber(struct list_head *p, bool is_src)
+{
+ if (is_src)
+ return list_entry(p, struct snd_seq_subscribers, src_list);
+ else
+ return list_entry(p, struct snd_seq_subscribers, dest_list);
+}
+
/*
* remove all subscribers on the list
* this is called from port_delete, for each src and dest list.
@@ -210,7 +220,7 @@ static struct snd_seq_client_port *get_client_port(struct snd_seq_addr *addr,
static void clear_subscriber_list(struct snd_seq_client *client,
struct snd_seq_client_port *port,
struct snd_seq_port_subs_info *grp,
- int grptype)
+ int is_src)
{
struct list_head *p, *n;

@@ -219,15 +229,13 @@ static void clear_subscriber_list(struct snd_seq_client *client,
struct snd_seq_client *c;
struct snd_seq_client_port *aport;

- if (grptype == SRC_LIST) {
- subs = list_entry(p, struct snd_seq_subscribers, src_list);
+ subs = get_subscriber(p, is_src);
+ if (is_src)
aport = get_client_port(&subs->info.dest, &c);
- } else {
- subs = list_entry(p, struct snd_seq_subscribers, dest_list);
+ else
aport = get_client_port(&subs->info.sender, &c);
- }
- list_del(p);
- unsubscribe_port(client, port, grp, &subs->info, 0);
+ delete_and_unsubscribe_port(client, port, subs, is_src, false);
+
if (!aport) {
/* looks like the connected port is being deleted.
* we decrease the counter, and when both ports are deleted
@@ -235,21 +243,14 @@ static void clear_subscriber_list(struct snd_seq_client *client,
*/
if (atomic_dec_and_test(&subs->ref_count))
kfree(subs);
- } else {
- /* ok we got the connected port */
- struct snd_seq_port_subs_info *agrp;
- agrp = (grptype == SRC_LIST) ? &aport->c_dest : &aport->c_src;
- down_write(&agrp->list_mutex);
- if (grptype == SRC_LIST)
- list_del(&subs->dest_list);
- else
- list_del(&subs->src_list);
- up_write(&agrp->list_mutex);
- unsubscribe_port(c, aport, agrp, &subs->info, 1);
- kfree(subs);
- snd_seq_port_unlock(aport);
- snd_seq_client_unlock(c);
+ continue;
}
+
+ /* ok we got the connected port */
+ delete_and_unsubscribe_port(c, aport, subs, !is_src, true);
+ kfree(subs);
+ snd_seq_port_unlock(aport);
+ snd_seq_client_unlock(c);
}
}

@@ -262,8 +263,8 @@ static int port_delete(struct snd_seq_client *client,
snd_use_lock_sync(&port->use_lock);

/* clear subscribers info */
- clear_subscriber_list(client, port, &port->c_src, SRC_LIST);
- clear_subscriber_list(client, port, &port->c_dest, DEST_LIST);
+ clear_subscriber_list(client, port, &port->c_src, true);
+ clear_subscriber_list(client, port, &port->c_dest, false);

if (port->private_free)
port->private_free(port->private_data);
@@ -479,85 +480,120 @@ static int match_subs_info(struct snd_seq_port_subscribe *r,
return 0;
}

-
-/* connect two ports */
-int snd_seq_port_connect(struct snd_seq_client *connector,
- struct snd_seq_client *src_client,
- struct snd_seq_client_port *src_port,
- struct snd_seq_client *dest_client,
- struct snd_seq_client_port *dest_port,
- struct snd_seq_port_subscribe *info)
+static int check_and_subscribe_port(struct snd_seq_client *client,
+ struct snd_seq_client_port *port,
+ struct snd_seq_subscribers *subs,
+ bool is_src, bool exclusive, bool ack)
{
- struct snd_seq_port_subs_info *src = &src_port->c_src;
- struct snd_seq_port_subs_info *dest = &dest_port->c_dest;
- struct snd_seq_subscribers *subs, *s;
- int err, src_called = 0;
- unsigned long flags;
- int exclusive;
+ struct snd_seq_port_subs_info *grp;
+ struct list_head *p;
+ struct snd_seq_subscribers *s;
+ int err;

- subs = kzalloc(sizeof(*subs), GFP_KERNEL);
- if (! subs)
- return -ENOMEM;
-
- subs->info = *info;
- atomic_set(&subs->ref_count, 2);
-
- down_write(&src->list_mutex);
- down_write_nested(&dest->list_mutex, SINGLE_DEPTH_NESTING);
-
- exclusive = info->flags & SNDRV_SEQ_PORT_SUBS_EXCLUSIVE ? 1 : 0;
+ grp = is_src ? &port->c_src : &port->c_dest;
err = -EBUSY;
+ down_write(&grp->list_mutex);
if (exclusive) {
- if (! list_empty(&src->list_head) || ! list_empty(&dest->list_head))
+ if (!list_empty(&grp->list_head))
goto __error;
} else {
- if (src->exclusive || dest->exclusive)
+ if (grp->exclusive)
goto __error;
/* check whether already exists */
- list_for_each_entry(s, &src->list_head, src_list) {
- if (match_subs_info(info, &s->info))
- goto __error;
- }
- list_for_each_entry(s, &dest->list_head, dest_list) {
- if (match_subs_info(info, &s->info))
+ list_for_each(p, &grp->list_head) {
+ s = get_subscriber(p, is_src);
+ if (match_subs_info(&subs->info, &s->info))
goto __error;
}
}

- if ((err = subscribe_port(src_client, src_port, src, info,
- connector->number != src_client->number)) < 0)
- goto __error;
- src_called = 1;
-
- if ((err = subscribe_port(dest_client, dest_port, dest, info,
- connector->number != dest_client->number)) < 0)
+ err = subscribe_port(client, port, grp, &subs->info, ack);
+ if (err < 0) {
+ grp->exclusive = 0;
goto __error;
+ }

/* add to list */
- write_lock_irqsave(&src->list_lock, flags);
- // write_lock(&dest->list_lock); // no other lock yet
- list_add_tail(&subs->src_list, &src->list_head);
- list_add_tail(&subs->dest_list, &dest->list_head);
- // write_unlock(&dest->list_lock); // no other lock yet
- write_unlock_irqrestore(&src->list_lock, flags);
+ write_lock_irq(&grp->list_lock);
+ if (is_src)
+ list_add_tail(&subs->src_list, &grp->list_head);
+ else
+ list_add_tail(&subs->dest_list, &grp->list_head);
+ grp->exclusive = exclusive;
+ atomic_inc(&subs->ref_count);
+ write_unlock_irq(&grp->list_lock);
+ err = 0;
+
+ __error:
+ up_write(&grp->list_mutex);
+ return err;
+}

- src->exclusive = dest->exclusive = exclusive;
+static void delete_and_unsubscribe_port(struct snd_seq_client *client,
+ struct snd_seq_client_port *port,
+ struct snd_seq_subscribers *subs,
+ bool is_src, bool ack)
+{
+ struct snd_seq_port_subs_info *grp;
+
+ grp = is_src ? &port->c_src : &port->c_dest;
+ down_write(&grp->list_mutex);
+ write_lock_irq(&grp->list_lock);
+ if (is_src)
+ list_del(&subs->src_list);
+ else
+ list_del(&subs->dest_list);
+ grp->exclusive = 0;
+ write_unlock_irq(&grp->list_lock);
+ up_write(&grp->list_mutex);
+
+ unsubscribe_port(client, port, grp, &subs->info, ack);
+}
+
+/* connect two ports */
+int snd_seq_port_connect(struct snd_seq_client *connector,
+ struct snd_seq_client *src_client,
+ struct snd_seq_client_port *src_port,
+ struct snd_seq_client *dest_client,
+ struct snd_seq_client_port *dest_port,
+ struct snd_seq_port_subscribe *info)
+{
+ struct snd_seq_subscribers *subs;
+ bool exclusive;
+ int err;
+
+ subs = kzalloc(sizeof(*subs), GFP_KERNEL);
+ if (!subs)
+ return -ENOMEM;
+
+ subs->info = *info;
+ atomic_set(&subs->ref_count, 0);
+ INIT_LIST_HEAD(&subs->src_list);
+ INIT_LIST_HEAD(&subs->dest_list);
+
+ exclusive = !!(info->flags & SNDRV_SEQ_PORT_SUBS_EXCLUSIVE);
+
+ err = check_and_subscribe_port(src_client, src_port, subs, true,
+ exclusive,
+ connector->number != src_client->number);
+ if (err < 0)
+ goto error;
+ err = check_and_subscribe_port(dest_client, dest_port, subs, false,
+ exclusive,
+ connector->number != dest_client->number);
+ if (err < 0)
+ goto error_dest;

- up_write(&dest->list_mutex);
- up_write(&src->list_mutex);
return 0;

- __error:
- if (src_called)
- unsubscribe_port(src_client, src_port, src, info,
- connector->number != src_client->number);
+ error_dest:
+ delete_and_unsubscribe_port(src_client, src_port, subs, true,
+ connector->number != src_client->number);
+ error:
kfree(subs);
- up_write(&dest->list_mutex);
- up_write(&src->list_mutex);
return err;
}

-
/* remove the connection */
int snd_seq_port_disconnect(struct snd_seq_client *connector,
struct snd_seq_client *src_client,
@@ -567,37 +603,28 @@ int snd_seq_port_disconnect(struct snd_seq_client *connector,
struct snd_seq_port_subscribe *info)
{
struct snd_seq_port_subs_info *src = &src_port->c_src;
- struct snd_seq_port_subs_info *dest = &dest_port->c_dest;
struct snd_seq_subscribers *subs;
int err = -ENOENT;
- unsigned long flags;

down_write(&src->list_mutex);
- down_write_nested(&dest->list_mutex, SINGLE_DEPTH_NESTING);
-
/* look for the connection */
list_for_each_entry(subs, &src->list_head, src_list) {
if (match_subs_info(info, &subs->info)) {
- write_lock_irqsave(&src->list_lock, flags);
- // write_lock(&dest->list_lock); // no lock yet
- list_del(&subs->src_list);
- list_del(&subs->dest_list);
- // write_unlock(&dest->list_lock);
- write_unlock_irqrestore(&src->list_lock, flags);
- src->exclusive = dest->exclusive = 0;
- unsubscribe_port(src_client, src_port, src, info,
- connector->number != src_client->number);
- unsubscribe_port(dest_client, dest_port, dest, info,
- connector->number != dest_client->number);
- kfree(subs);
+ atomic_dec(&subs->ref_count); /* mark as not ready */
err = 0;
break;
}
}
-
- up_write(&dest->list_mutex);
up_write(&src->list_mutex);
- return err;
+ if (err < 0)
+ return err;
+
+ delete_and_unsubscribe_port(src_client, src_port, subs, true,
+ connector->number != src_client->number);
+ delete_and_unsubscribe_port(dest_client, dest_port, subs, false,
+ connector->number != dest_client->number);
+ kfree(subs);
+ return 0;
}


--
2.7.0

Dmitry Vyukov

Feb 3, 2016, 8:22:38 AM
to Takashi Iwai, alsa-...@alsa-project.org, Jie Yang, Mark Brown, Jaroslav Kysela, LKML, Alexander Potapenko, Kostya Serebryany, syzkaller, Sasha Levin
On Wed, Feb 3, 2016 at 8:47 AM, Takashi Iwai <ti...@suse.de> wrote:
>> > This looks like a false-positive report to me. Of course, we should
>> > annotate the mutex there for nested locks, though.
>>
>>
>> Takashi, can you please annotate it for lockdep? I hit it on every run.
>
> The lock had an annotation but alas it didn't seem enough.
> In anyway, it's not good to have double locks if it's avoidable. So I
> worked on it now, and below is the current result of the hack.
>
> The change became a bit more intrusive than wished, but it should be
> still simple enough. I put this on top of topic/core-fixes branch.


I don't see the deadlock reports now. Thanks!

Takashi Iwai

Feb 3, 2016, 9:41:59 AM
to Dmitry Vyukov, alsa-...@alsa-project.org, Jie Yang, Mark Brown, Jaroslav Kysela, LKML, Alexander Potapenko, Kostya Serebryany, syzkaller, Sasha Levin
On Wed, 03 Feb 2016 14:22:18 +0100,
Dmitry Vyukov wrote:
>
> On Wed, Feb 3, 2016 at 8:47 AM, Takashi Iwai <ti...@suse.de> wrote:
> >> > This looks like a false-positive report to me. Of course, we should
> >> > annotate the mutex there for nested locks, though.
> >>
> >>
> >> Takashi, can you please annotate it for lockdep? I hit it on every run.
> >
> > The lock had an annotation but alas it didn't seem enough.
> > In anyway, it's not good to have double locks if it's avoidable. So I
> > worked on it now, and below is the current result of the hack.
> >
> > The change became a bit more intrusive than wished, but it should be
> > still simple enough. I put this on top of topic/core-fixes branch.
>
>
> I don't see the deadlock reports now. Thanks!

Good to hear; now queued for the next pull request.
Thanks for the quick tests!


Takashi