[syzbot] [net?] unregister_netdevice: waiting for DEV to become free (8)


syzbot

Jun 9, 2023, 9:34:59 PM
to ar...@arndb.de, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linux-...@vger.kernel.org, net...@vger.kernel.org, nik...@nvidia.com, pab...@redhat.com, ro...@nvidia.com, syzkall...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 67faabbde36b selftests/bpf: Add missing prototypes for sev..
git tree: bpf-next
console+strace: https://syzkaller.appspot.com/x/log.txt?x=1381363b280000
kernel config: https://syzkaller.appspot.com/x/.config?x=5335204dcdecfda
dashboard link: https://syzkaller.appspot.com/bug?extid=881d65229ca4f9ae8c84
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=132faf93280000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=10532add280000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/751a0490d875/disk-67faabbd.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/2c5106cd9f1f/vmlinux-67faabbd.xz
kernel image: https://storage.googleapis.com/syzbot-assets/62c1154294e4/bzImage-67faabbd.xz

The issue was bisected to:

commit ad2f99aedf8fa77f3ae647153284fa63c43d3055
Author: Arnd Bergmann <ar...@arndb.de>
Date: Tue Jul 27 13:45:16 2021 +0000

net: bridge: move bridge ioctls out of .ndo_do_ioctl

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=146de6f1280000
final oops: https://syzkaller.appspot.com/x/report.txt?x=166de6f1280000
console output: https://syzkaller.appspot.com/x/log.txt?x=126de6f1280000

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+881d65...@syzkaller.appspotmail.com
Fixes: ad2f99aedf8f ("net: bridge: move bridge ioctls out of .ndo_do_ioctl")

unregister_netdevice: waiting for bridge0 to become free. Usage count = 2
leaked reference.
__netdev_tracker_alloc include/linux/netdevice.h:4070 [inline]
netdev_hold include/linux/netdevice.h:4099 [inline]
dev_ifsioc+0xbc0/0xeb0 net/core/dev_ioctl.c:408
dev_ioctl+0x250/0x1090 net/core/dev_ioctl.c:605
sock_do_ioctl+0x15a/0x230 net/socket.c:1215
sock_ioctl+0x1f8/0x680 net/socket.c:1318
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__x64_sys_ioctl+0x197/0x210 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
For information about bisection process see: https://goo.gl/tpsmEJ#bisection

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to change bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

Ziqi Zhao

Jun 21, 2023, 3:38:57 AM
to syzbot+881d65...@syzkaller.appspotmail.com, ar...@arndb.de, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linux-...@vger.kernel.org, net...@vger.kernel.org, nik...@nvidia.com, pab...@redhat.com, ro...@nvidia.com, syzkall...@googlegroups.com, sk...@linuxfoundation.org, ivan.or...@gmail.com
Hi all,

I'm taking a look at this bug as part of the exercise for the Linux
Kernel Bug Fixing Summer 2023 program. Thanks to the help from my
mentors, Ivan Orlov and Shuah Khan, I've already obtained a reproduction
of the issue using the provided C reproducer, and I should be able to
submit a patch by the end of this week to fix the highlighted error. If
you have any information or suggestions, please feel free to reply to
this thread. Any help would be greatly appreciated!

Best regards,
Ziqi

Dongliang Mu

Jun 21, 2023, 4:50:11 AM
to Ziqi Zhao, syzbot+881d65...@syzkaller.appspotmail.com, ar...@arndb.de, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, ku...@kernel.org, linux-...@vger.kernel.org, net...@vger.kernel.org, nik...@nvidia.com, pab...@redhat.com, ro...@nvidia.com, syzkall...@googlegroups.com, sk...@linuxfoundation.org, ivan.or...@gmail.com
On Wed, Jun 21, 2023 at 3:38 PM 'Ziqi Zhao' via syzkaller-bugs
<syzkall...@googlegroups.com> wrote:
>
> Hi all,
>
> I'm taking a look at this bug as part of the exercise for the Linux
> Kernel Bug Fixing Summer 2023 program. Thanks to the help from my

This is an interesting program. There are many kernel crashes on the
syzbot dashboard that need attention.

> mentors, Ivan Orlov and Shuah Khan, I've already obtained a reproduction
> of the issue using the provided C reproducer, and I should be able to
> submit a patch by the end of this week to fix the highlighted error. If
> you have any information or suggestions, please feel free to reply to
> this thread. Any help would be greatly appreciated!

Please carefully read the guidance on submitting patches to the Linux
kernel [1]. Check your coding style carefully before sending.

Note that syzbot has a patch-testing feature: you can submit your own
patch for testing to confirm that it works properly.

[1] https://docs.kernel.org/process/submitting-patches.html
>
> Best regards,
> Ziqi
>
> --
> You received this message because you are subscribed to the Google Groups "syzkaller-bugs" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to syzkaller-bug...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/syzkaller-bugs/20230621070710.380373-1-astrajoan%40yahoo.com.

Ziqi Zhao

Jun 26, 2023, 1:50:26 AM
to mudongl...@gmail.com, ar...@arndb.de, astr...@yahoo.com, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, ivan.or...@gmail.com, ku...@kernel.org, linux-...@vger.kernel.org, net...@vger.kernel.org, nik...@nvidia.com, pab...@redhat.com, ro...@nvidia.com, sk...@linuxfoundation.org, syzbot+881d65...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
The following three locks can be taken in conflicting orders, causing
the deadlock situation in the syzbot bug report:

- j1939_socks_lock
- active_session_list_lock
- sk_session_queue_lock

A reasonable fix is to change j1939_socks_lock to an rwlock, since in
the rare situations where a write lock is required for the linked list
that j1939_socks_lock protects, the code does not attempt to acquire
any further locks. This breaks the circular lock dependency in which,
for example, one thread holds j1939_socks_lock and attempts to acquire
sk_session_queue_lock, while at the same time another thread attempts
to acquire j1939_socks_lock while holding sk_session_queue_lock.
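
To make the scenario concrete, here is a minimal sketch of the ABBA
pattern described above (simplified; the two functions are placeholders,
not the actual j1939 call sites):

/* Context A: holds the socket-list lock, then needs the session queue. */
static void context_a(struct j1939_priv *priv, struct j1939_sock *jsk)
{
        spin_lock_bh(&priv->j1939_socks_lock);     /* lock 1 */
        spin_lock_bh(&jsk->sk_session_queue_lock); /* lock 2: waits if B holds it */
        /* ... */
        spin_unlock_bh(&jsk->sk_session_queue_lock);
        spin_unlock_bh(&priv->j1939_socks_lock);
}

/* Context B: takes the same two locks in the opposite order. */
static void context_b(struct j1939_priv *priv, struct j1939_sock *jsk)
{
        spin_lock_bh(&jsk->sk_session_queue_lock); /* lock 2 */
        spin_lock_bh(&priv->j1939_socks_lock);     /* lock 1: waits if A holds it */
        /* ... */
        spin_unlock_bh(&priv->j1939_socks_lock);
        spin_unlock_bh(&jsk->sk_session_queue_lock);
}

With j1939_socks_lock converted to an rwlock, the nested list walks only
take read_lock_bh(), which does not exclude other readers, and the write
side is only taken with no other lock held, so this cycle cannot close.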

NOTE: This patch alone does not fix the unregister_netdevice bug
reported by Syzbot; instead, it solves a deadlock situation to prepare
for one or more further patches to actually fix the Syzbot bug, which
appears to be a reference counting problem within the j1939 codebase.

Signed-off-by: Ziqi Zhao <astr...@yahoo.com>
---
net/can/j1939/j1939-priv.h | 2 +-
net/can/j1939/main.c | 2 +-
net/can/j1939/socket.c | 25 +++++++++++++------------
3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/net/can/j1939/j1939-priv.h b/net/can/j1939/j1939-priv.h
index 16af1a7f80f6..74f15592d170 100644
--- a/net/can/j1939/j1939-priv.h
+++ b/net/can/j1939/j1939-priv.h
@@ -86,7 +86,7 @@ struct j1939_priv {
unsigned int tp_max_packet_size;

/* lock for j1939_socks list */
- spinlock_t j1939_socks_lock;
+ rwlock_t j1939_socks_lock;
struct list_head j1939_socks;

struct kref rx_kref;
diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
index ecff1c947d68..a6fb89fa6278 100644
--- a/net/can/j1939/main.c
+++ b/net/can/j1939/main.c
@@ -274,7 +274,7 @@ struct j1939_priv *j1939_netdev_start(struct net_device *ndev)
return ERR_PTR(-ENOMEM);

j1939_tp_init(priv);
- spin_lock_init(&priv->j1939_socks_lock);
+ rwlock_init(&priv->j1939_socks_lock);
INIT_LIST_HEAD(&priv->j1939_socks);

mutex_lock(&j1939_netdev_lock);
diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
index 35970c25496a..6dce9d645116 100644
--- a/net/can/j1939/socket.c
+++ b/net/can/j1939/socket.c
@@ -80,16 +80,16 @@ static void j1939_jsk_add(struct j1939_priv *priv, struct j1939_sock *jsk)
jsk->state |= J1939_SOCK_BOUND;
j1939_priv_get(priv);

- spin_lock_bh(&priv->j1939_socks_lock);
+ write_lock_bh(&priv->j1939_socks_lock);
list_add_tail(&jsk->list, &priv->j1939_socks);
- spin_unlock_bh(&priv->j1939_socks_lock);
+ write_unlock_bh(&priv->j1939_socks_lock);
}

static void j1939_jsk_del(struct j1939_priv *priv, struct j1939_sock *jsk)
{
- spin_lock_bh(&priv->j1939_socks_lock);
+ write_lock_bh(&priv->j1939_socks_lock);
list_del_init(&jsk->list);
- spin_unlock_bh(&priv->j1939_socks_lock);
+ write_unlock_bh(&priv->j1939_socks_lock);

j1939_priv_put(priv);
jsk->state &= ~J1939_SOCK_BOUND;
@@ -329,13 +329,13 @@ bool j1939_sk_recv_match(struct j1939_priv *priv, struct j1939_sk_buff_cb *skcb)
struct j1939_sock *jsk;
bool match = false;

- spin_lock_bh(&priv->j1939_socks_lock);
+ read_lock_bh(&priv->j1939_socks_lock);
list_for_each_entry(jsk, &priv->j1939_socks, list) {
match = j1939_sk_recv_match_one(jsk, skcb);
if (match)
break;
}
- spin_unlock_bh(&priv->j1939_socks_lock);
+ read_unlock_bh(&priv->j1939_socks_lock);

return match;
}
@@ -344,11 +344,11 @@ void j1939_sk_recv(struct j1939_priv *priv, struct sk_buff *skb)
{
struct j1939_sock *jsk;

- spin_lock_bh(&priv->j1939_socks_lock);
+ read_lock_bh(&priv->j1939_socks_lock);
list_for_each_entry(jsk, &priv->j1939_socks, list) {
j1939_sk_recv_one(jsk, skb);
}
- spin_unlock_bh(&priv->j1939_socks_lock);
+ read_unlock_bh(&priv->j1939_socks_lock);
}

static void j1939_sk_sock_destruct(struct sock *sk)
@@ -484,6 +484,7 @@ static int j1939_sk_bind(struct socket *sock, struct sockaddr *uaddr, int len)

priv = j1939_netdev_start(ndev);
dev_put(ndev);
+
if (IS_ERR(priv)) {
ret = PTR_ERR(priv);
goto out_release_sock;
@@ -1078,12 +1079,12 @@ void j1939_sk_errqueue(struct j1939_session *session,
}

/* spread RX notifications to all sockets subscribed to this session */
- spin_lock_bh(&priv->j1939_socks_lock);
+ read_lock_bh(&priv->j1939_socks_lock);
list_for_each_entry(jsk, &priv->j1939_socks, list) {
if (j1939_sk_recv_match_one(jsk, &session->skcb))
__j1939_sk_errqueue(session, &jsk->sk, type);
}
- spin_unlock_bh(&priv->j1939_socks_lock);
+ read_unlock_bh(&priv->j1939_socks_lock);
};

void j1939_sk_send_loop_abort(struct sock *sk, int err)
@@ -1271,7 +1272,7 @@ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv)
struct j1939_sock *jsk;
int error_code = ENETDOWN;

- spin_lock_bh(&priv->j1939_socks_lock);
+ read_lock_bh(&priv->j1939_socks_lock);
list_for_each_entry(jsk, &priv->j1939_socks, list) {
jsk->sk.sk_err = error_code;
if (!sock_flag(&jsk->sk, SOCK_DEAD))
@@ -1279,7 +1280,7 @@ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv)

j1939_sk_queue_drop_all(priv, jsk, error_code);
}
- spin_unlock_bh(&priv->j1939_socks_lock);
+ read_unlock_bh(&priv->j1939_socks_lock);
}

static int j1939_sk_no_ioctlcmd(struct socket *sock, unsigned int cmd,
--
2.34.1

Ziqi Zhao

Aug 19, 2023, 2:56:15 AM
to syzbot+881d65...@syzkaller.appspotmail.com, syzkall...@googlegroups.com, Ziqi Zhao
#syz test:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

In the bug reported by syzbot, certain bridge devices would have a
leaked reference created by race conditions in dev_ioctl, specifically
under SIOCBRADDIF or SIOCBRDELIF operations. The leak shows up in the
periodic unregister_netdevice check, which prints a warning and causes
syzbot to report a crash. Upon inspection of the logic in dev_ioctl, it
seems the reference was introduced to ensure proper access to the
bridge device after rtnl_unlock, and that unlock is necessary to
maintain the following lock order in any bridge-related ioctl calls:

1) br_ioctl_mutex => 2) rtnl_lock

Conceptually, though, br_ioctl_mutex could be considered more specific
than rtnl_lock given their usages, hence swapping their order would be
a reasonable proposal. This patch changes all related call sites to
maintain the reversed order of the two locks:

1) rtnl_lock => 2) br_ioctl_mutex

By doing so, the extra reference introduced in dev_ioctl is no longer
needed, and hence the reference leak bug is now resolved.

Reported-by: syzbot+881d65...@syzkaller.appspotmail.com
Fixes: ad2f99aedf8f ("net: bridge: move bridge ioctls out of .ndo_do_ioctl")
Signed-off-by: Ziqi Zhao <astr...@yahoo.com>
---
net/bridge/br_ioctl.c | 4 ----
net/core/dev_ioctl.c | 8 +-------
net/socket.c | 2 ++
3 files changed, 3 insertions(+), 11 deletions(-)

diff --git a/net/bridge/br_ioctl.c b/net/bridge/br_ioctl.c
index f213ed108361..291dbc5d2a99 100644
--- a/net/bridge/br_ioctl.c
+++ b/net/bridge/br_ioctl.c
@@ -399,8 +399,6 @@ int br_ioctl_stub(struct net *net, struct net_bridge *br, unsigned int cmd,
{
int ret = -EOPNOTSUPP;

- rtnl_lock();
-
switch (cmd) {
case SIOCGIFBR:
case SIOCSIFBR:
@@ -434,7 +432,5 @@ int br_ioctl_stub(struct net *net, struct net_bridge *br, unsigned int cmd,
break;
}

- rtnl_unlock();
-
return ret;
}
diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
index 3730945ee294..17df956df8cb 100644
--- a/net/core/dev_ioctl.c
+++ b/net/core/dev_ioctl.c
@@ -336,7 +336,6 @@ static int dev_ifsioc(struct net *net, struct ifreq *ifr, void __user *data,
int err;
struct net_device *dev = __dev_get_by_name(net, ifr->ifr_name);
const struct net_device_ops *ops;
- netdevice_tracker dev_tracker;

if (!dev)
return -ENODEV;
@@ -405,12 +404,7 @@ static int dev_ifsioc(struct net *net, struct ifreq *ifr, void __user *data,
return -ENODEV;
if (!netif_is_bridge_master(dev))
return -EOPNOTSUPP;
- netdev_hold(dev, &dev_tracker, GFP_KERNEL);
- rtnl_unlock();
- err = br_ioctl_call(net, netdev_priv(dev), cmd, ifr, NULL);
- netdev_put(dev, &dev_tracker);
- rtnl_lock();
- return err;
+ return br_ioctl_call(net, netdev_priv(dev), cmd, ifr, NULL);

case SIOCDEVPRIVATE ... SIOCDEVPRIVATE + 15:
return dev_siocdevprivate(dev, ifr, data, cmd);
diff --git a/net/socket.c b/net/socket.c
index 2b0e54b2405c..6b7a9df9a326 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -1258,7 +1258,9 @@ static long sock_ioctl(struct file *file, unsigned cmd, unsigned long arg)
case SIOCSIFBR:
case SIOCBRADDBR:
case SIOCBRDELBR:
+ rtnl_lock();
err = br_ioctl_call(net, NULL, cmd, NULL, argp);
+ rtnl_unlock();
break;
case SIOCGIFVLAN:
case SIOCSIFVLAN:
--
2.34.1

syzbot

Aug 19, 2023, 3:19:27 AM
to astr...@yahoo.com, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch and the reproducer did not trigger any issue:

Reported-and-tested-by: syzbot+881d65...@syzkaller.appspotmail.com

Tested on:

commit: d4ddefee Merge tag 'arm64-fixes' of git://git.kernel.o..
git tree: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
console output: https://syzkaller.appspot.com/x/log.txt?x=1242856ba80000
kernel config: https://syzkaller.appspot.com/x/.config?x=9c37cc0e4fcc5f8d
dashboard link: https://syzkaller.appspot.com/bug?extid=881d65229ca4f9ae8c84
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=10f9a727a80000

Note: testing is done by a robot and is best-effort only.

Ziqi Zhao

Aug 19, 2023, 4:11:05 AM
to ar...@arndb.de, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, f.fai...@gmail.com, ivan.or...@gmail.com, kees...@chromium.org, ku...@kernel.org, hkall...@gmail.com, mudongl...@gmail.com, nik...@nvidia.com, pab...@redhat.com, ra...@blackwall.org, ro...@nvidia.com, sk...@linuxfoundation.org, syzbot+881d65...@syzkaller.appspotmail.com, vladimi...@nxp.com, linux-...@vger.kernel.org, net...@vger.kernel.org, syzkall...@googlegroups.com, Ziqi Zhao
In the bug reported by syzbot, certain bridge devices would have a
leaked reference created by race conditions in dev_ioctl, specifically
under SIOCBRADDIF or SIOCBRDELIF operations. The leak shows up in the
periodic unregister_netdevice check, which prints a warning and causes
syzbot to report a crash. Upon inspection of the logic in dev_ioctl, it
seems the reference was introduced to ensure proper access to the
bridge device after rtnl_unlock, and that unlock is necessary to
maintain the following lock order in any bridge-related ioctl calls:

1) br_ioctl_mutex => 2) rtnl_lock

Conceptually, though, br_ioctl_mutex could be considered more specific
than rtnl_lock given their usages, hence swapping their order would be
a reasonable proposal. This patch changes all related call sites to
maintain the reversed order of the two locks:

1) rtnl_lock => 2) br_ioctl_mutex

By doing so, the extra reference introduced in dev_ioctl is no longer
needed, and hence the reference leak bug is now resolved.
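
As a quick illustration, the resulting nesting at the socket-ioctl
entry point looks like this (abbreviated from the sock_ioctl hunk of
this patch; error handling elided):

        case SIOCSIFBR:
        case SIOCBRADDBR:
        case SIOCBRDELBR:
                rtnl_lock();                                     /* 1) RTNL taken first by the caller */
                err = br_ioctl_call(net, NULL, cmd, NULL, argp); /* 2) br_ioctl_mutex taken inside */
                rtnl_unlock();
                break;

Since br_ioctl_stub no longer takes rtnl_lock itself, dev_ifsioc can
call br_ioctl_call directly under the RTNL it already holds, without
the netdev_hold/rtnl_unlock/netdev_put dance that introduced the extra
reference.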

Reported-by: syzbot+881d65...@syzkaller.appspotmail.com
Fixes: ad2f99aedf8f ("net: bridge: move bridge ioctls out of .ndo_do_ioctl")

Nikolay Aleksandrov

Aug 19, 2023, 5:33:32 PM
to Ziqi Zhao, ar...@arndb.de, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, f.fai...@gmail.com, ivan.or...@gmail.com, kees...@chromium.org, ku...@kernel.org, hkall...@gmail.com, mudongl...@gmail.com, nik...@nvidia.com, pab...@redhat.com, ro...@nvidia.com, sk...@linuxfoundation.org, syzbot+881d65...@syzkaller.appspotmail.com, vladimi...@nxp.com, linux-...@vger.kernel.org, net...@vger.kernel.org, syzkall...@googlegroups.com
Hi Ziqi,
On 8/19/23 11:10, Ziqi Zhao wrote:
> In the bug reported by Syzbot, certain bridge devices would have a
> leaked reference created by race conditions in dev_ioctl, specifically,
> under SIOCBRADDIF or SIOCBRDELIF operations. The reference leak would

How would it leak a reference, could you elaborate?
The reference is always taken and always released after the call.

> be shown in the periodic unregister_netdevice call, which throws a
> warning and cause Syzbot to report a crash. Upon inspection of the

If you reproduced it, is the device later removed or is it really stuck?

> logic in dev_ioctl, it seems the reference was introduced to ensure
> proper access to the bridge device after rtnl_unlock. and the latter
> function is necessary to maintain the following lock order in any
> bridge related ioctl calls:
>
> 1) br_ioctl_mutex => 2) rtnl_lock
>
> Conceptually, though, br_ioctl_mutex could be considered more specific
> than rtnl_lock given their usages, hence swapping their order would be
> a reasonable proposal. This patch changes all related call sites to
> maintain the reversed order of the two locks:
>
> 1) rtnl_lock => 2) br_ioctl_mutex
>
> By doing so, the extra reference introduced in dev_ioctl is no longer
> needed, and hence the reference leak bug is now resolved.

IIRC there was no bug, it was a false-positive. The reference is held a
bit longer but then released, so the device is deleted later.
I might be remembering wrong, but I think I briefly looked into this
when it was reported. If that's not the case I'd be interested to see
a new report/trace because the bug might be somewhere else.

>
> Reported-by: syzbot+881d65...@syzkaller.appspotmail.com
> Fixes: ad2f99aedf8f ("net: bridge: move bridge ioctls out of .ndo_do_ioctl")
> Signed-off-by: Ziqi Zhao <astr...@yahoo.com>
> ---
> net/bridge/br_ioctl.c | 4 ----
> net/core/dev_ioctl.c | 8 +-------
> net/socket.c | 2 ++
> 3 files changed, 3 insertions(+), 11 deletions(-)
>

Thanks,
Nik


Ziqi Zhao

Aug 19, 2023, 6:50:53 PM
to Nikolay Aleksandrov, ar...@arndb.de, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, f.fai...@gmail.com, ivan.or...@gmail.com, kees...@chromium.org, ku...@kernel.org, hkall...@gmail.com, mudongl...@gmail.com, nik...@nvidia.com, pab...@redhat.com, ro...@nvidia.com, sk...@linuxfoundation.org, syzbot+881d65...@syzkaller.appspotmail.com, vladimi...@nxp.com, linux-...@vger.kernel.org, net...@vger.kernel.org, syzkall...@googlegroups.com
On Sat, Aug 19, 2023 at 12:25:15PM +0300, Nikolay Aleksandrov wrote:
Hi Nik,

Thank you so much for reviewing the patch and getting back to me!

> IIRC there was no bug, it was a false-positive. The reference is held a bit
> longer but then released, so the device is deleted later.

> If you reproduced it, is the device later removed or is it really stuck?

I ran the reproducer again without the patch and it seems you are
correct. It was trying to create a very short-lived bridge, then delete
it immediately in the next call. The device in question "wpan4" never
showed up when I polled with `ip link` in the VM, so I'd say it did not
get stuck.

> How would it leak a reference, could you elaborate?
> The reference is always taken and always released after the call.

This was where I got a bit confused too. The system had a timeout of
140 seconds for the unregister_netdevice check. If the bridge in
question was created and deleted repeatedly, the warning would indeed
not be an actual reference leak. But how could its reference show up
after 140 seconds if the bridge's creation and deletion were all within
a couple of milliseconds?
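
For reference, my understanding is that the warning comes from the wait
loop that runs while a device is being torn down; roughly (paraphrasing
netdev_wait_allrefs() from memory, so the details may differ by kernel
version):

        unsigned long warning_time = jiffies;

        /* Simplified sketch: after unregister, wait for the refcount to
         * drop and periodically warn while someone still holds a reference.
         */
        while (netdev_refcnt_read(dev) > 1) {
                msleep(250);
                if (time_after(jiffies, warning_time + 10 * HZ)) {
                        pr_emerg("unregister_netdevice: waiting for %s to become free. Usage count = %d\n",
                                 dev->name, netdev_refcnt_read(dev));
                        warning_time = jiffies;
                }
        }

So the warning should only appear while a reference is actually still
outstanding at that moment, which is what makes the 140-second gap
surprising to me.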

So I let the system run for a bit longer with the reproducer, and after
~200 seconds, the kernel crashed and complained that some tasks had
been waiting for too long (more than 143 seconds) trying to get hold of
the br_ioctl_mutex. This was also quite strange to me, since on the
surface it definitely looked like a deadlock, but the strict locking
order as I described previously should prevent any deadlocks from
happening.

Anyway, I decided to test swapping the lock order, since both the
refcnt warning and the task stall seemed closely related to the
above-mentioned locks. When I ran the reproducer again with the patch, both
the warning and the stall issue went away. So I guess the patch is
still relevant in preventing bugs in some extreme cases -- although the
scenario created by the reproducer would probably never happen in real
usages?

Please let me know whether you have any thoughts on how the above
issues were triggered, and what other information I could gather to
further demystify this bug. Thank you again for your help!

Best regards,
Ziqi

Nikolay Aleksandrov

Aug 22, 2023, 6:40:49 AM
to Ziqi Zhao, ar...@arndb.de, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, f.fai...@gmail.com, ivan.or...@gmail.com, kees...@chromium.org, ku...@kernel.org, hkall...@gmail.com, mudongl...@gmail.com, nik...@nvidia.com, pab...@redhat.com, ro...@nvidia.com, sk...@linuxfoundation.org, syzbot+881d65...@syzkaller.appspotmail.com, vladimi...@nxp.com, linux-...@vger.kernel.org, net...@vger.kernel.org, syzkall...@googlegroups.com
Thank you for testing, but we really need to understand what is going on
and why the device isn't getting deleted for so long. Currently I don't
have the time to debug it properly (I'll be able to next week at the
earliest). We can't apply the patch based only on tests without
understanding the underlying issue. I'd look into what
the reproducer is doing exactly and also check the system state while
the deadlock has happened. Also you can list the currently held locks
(if CONFIG_LOCKDEP is enabled) via magic sysrq + d for example. See
which process is holding them, what are their priorities and so on.
Try to build some theory of how a deadlock might happen and then go
about proving it. Does the 8021q module have the same problem? It uses
similar code to set its hook.

Ziqi Zhao

Aug 23, 2023, 5:38:51 AM
to Nikolay Aleksandrov, ar...@arndb.de, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, f.fai...@gmail.com, ivan.or...@gmail.com, kees...@chromium.org, ku...@kernel.org, hkall...@gmail.com, mudongl...@gmail.com, nik...@nvidia.com, pab...@redhat.com, ro...@nvidia.com, sk...@linuxfoundation.org, syzbot+881d65...@syzkaller.appspotmail.com, vladimi...@nxp.com, linux-...@vger.kernel.org, net...@vger.kernel.org, syzkall...@googlegroups.com
On Tue, Aug 22, 2023 at 01:40:45PM +0300, Nikolay Aleksandrov wrote:

> Thank you for testing, but we really need to understand what is going on and
> why the device isn't getting deleted for so long. Currently I don't have the
> time to debug it properly (I'll be able to next week at the earliest). We
> can't apply the patch based only on tests without understanding the
> underlying issue. I'd look into what
> the reproducer is doing exactly and also check the system state while the
> deadlock has happened. Also you can list the currently held locks (if
> CONFIG_LOCKDEP is enabled) via magic sysrq + d for example. See which
> process is holding them, what are their priorities and so on.
> Try to build some theory of how a deadlock might happen and then go
> about proving it. Does the 8021q module have the same problem? It uses
> similar code to set its hook.

Hi Nik,

Thank you so much for the instructions! I was able to obtain a decoded
stacktrace showing the reproducer behavior in my QEMU VM running kernel
6.5-rc4, in case that would give us more context for pinpointing the
problem. Here's a link to the output:

https://pastecat.io/?p=IlKZlflN9j2Z2mspjKe7

Basically, after running the reproducer (line 1854) for about 180
seconds or so, the unregister_netdevice warning was shown (line 1856),
and then after another 50 seconds, the kernel detected that some tasks
had been stalled for more than 143 seconds (line 1866), so it panicked
on the blocked tasks (line 2116). Before the panic, we did get to see
all the locks held in the system (line 2068), and it did show that many
processes created by the reproducer were contending the br_ioctl_mutex.
I'm now starting to wonder whether this is really a deadlock, or simply
some tasks not being able to grab the lock because so many processes
are trying to acquire it.

Let me know what you think about the situation shown in the above log,
and let's keep in touch for any future debugging. Thank you again for
guiding me through the problem!

Best regards,
Ziqi

Alexander Ofitserov

Feb 12, 2024, 4:20:07 PM
to astr...@yahoo.com, ar...@arndb.de, bri...@lists.linux-foundation.org, da...@davemloft.net, edum...@google.com, f.fai...@gmail.com, hkall...@gmail.com, ivan.or...@gmail.com, kees...@chromium.org, ku...@kernel.org, linux-...@vger.kernel.org, mudongl...@gmail.com, net...@vger.kernel.org, nik...@nvidia.com, pab...@redhat.com, ra...@blackwall.org, ro...@nvidia.com, sk...@linuxfoundation.org, syzbot+881d65...@syzkaller.appspotmail.com, syzkall...@googlegroups.com, vladimi...@nxp.com, dut...@altlinux.org, Alexander Ofitserov
Hello,

I've also encountered this bug while fuzzing. Is there any ongoing work
on this bug?


--
2.42.1

Alexander Ofitserov

Feb 12, 2024, 4:20:08 PM
to Ziqi Zhao, Nikolay Aleksandrov, f.fai...@gmail.com, ivan.or...@gmail.com, kees...@chromium.org, ar...@arndb.de, vladimi...@nxp.com, bri...@lists.linux-foundation.org, syzkall...@googlegroups.com, mudongl...@gmail.com, linux-...@vger.kernel.org, edum...@google.com, net...@vger.kernel.org, nik...@nvidia.com, ro...@nvidia.com, syzbot+881d65...@syzkaller.appspotmail.com, ku...@kernel.org, sk...@linuxfoundation.org, pab...@redhat.com, da...@davemloft.net, hkall...@gmail.com


On Wed, Aug 23, 2023 at 00:38:46PM +0300, Ziqi Zhao wrote: