INFO: task hung in __get_super


syzbot

Apr 1, 2018, 1:08:02 PM
to linux-...@vger.kernel.org, linux-...@vger.kernel.org, syzkall...@googlegroups.com, vi...@zeniv.linux.org.uk
Hello,

syzbot hit the following crash on upstream commit
10b84daddbec72c6b440216a69de9a9605127f7a (Sat Mar 31 17:59:00 2018 +0000)
Merge branch 'perf-urgent-for-linus' of
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
syzbot dashboard link:
https://syzkaller.appspot.com/bug?extid=10007d66ca02b08f0e60

Unfortunately, I don't have any reproducer for this crash yet.
Raw console output:
https://syzkaller.appspot.com/x/log.txt?id=5899419228569600
Kernel config:
https://syzkaller.appspot.com/x/.config?id=-2760467897697295172
compiler: gcc (GCC) 7.1.1 20170620

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+10007d...@syzkaller.appspotmail.com
It will help syzbot understand when the bug is fixed. See footer for
details.
If you forward the report, please keep this part and the footer.

IPv6: ADDRCONF(NETDEV_UP): veth0: link is not ready
IPv6: ADDRCONF(NETDEV_UP): veth1: link is not ready
IPv6: ADDRCONF(NETDEV_CHANGE): veth1: link becomes ready
IPv6: ADDRCONF(NETDEV_CHANGE): veth0: link becomes ready
random: crng init done
INFO: task syz-executor3:13421 blocked for more than 120 seconds.
Not tainted 4.16.0-rc7+ #9
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor3 D24672 13421 4481 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2862 [inline]
__schedule+0x8fb/0x1ec0 kernel/sched/core.c:3440
schedule+0xf5/0x430 kernel/sched/core.c:3499
__rwsem_down_read_failed_common kernel/locking/rwsem-xadd.c:269 [inline]
rwsem_down_read_failed+0x401/0x6e0 kernel/locking/rwsem-xadd.c:286
call_rwsem_down_read_failed+0x18/0x30 arch/x86/lib/rwsem.S:94
__down_read arch/x86/include/asm/rwsem.h:83 [inline]
down_read+0xa4/0x150 kernel/locking/rwsem.c:26
__get_super.part.9+0x1d3/0x280 fs/super.c:663
__get_super include/linux/spinlock.h:310 [inline]
get_super+0x2d/0x40 fs/super.c:692
fsync_bdev+0x19/0x80 fs/block_dev.c:468
invalidate_partition+0x35/0x60 block/genhd.c:1566
drop_partitions.isra.12+0xcd/0x1d0 block/partition-generic.c:440
rescan_partitions+0x72/0x900 block/partition-generic.c:513
__blkdev_reread_part+0x15f/0x1e0 block/ioctl.c:173
blkdev_reread_part+0x26/0x40 block/ioctl.c:193
loop_reread_partitions+0x12f/0x1a0 drivers/block/loop.c:619
loop_set_status+0x9bb/0xf60 drivers/block/loop.c:1161
loop_set_status64+0x9d/0x110 drivers/block/loop.c:1271
lo_ioctl+0xd86/0x1b70 drivers/block/loop.c:1381
__blkdev_driver_ioctl block/ioctl.c:303 [inline]
blkdev_ioctl+0x1759/0x1e00 block/ioctl.c:601
block_ioctl+0xde/0x120 fs/block_dev.c:1875
vfs_ioctl fs/ioctl.c:46 [inline]
do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:686
SYSC_ioctl fs/ioctl.c:701 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:692
do_syscall_64+0x281/0x940 arch/x86/entry/common.c:287
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x454e79
RSP: 002b:00007fda691eec68 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fda691ef6d4 RCX: 0000000000454e79
RDX: 00000000200001c0 RSI: 0000000000004c04 RDI: 0000000000000013
RBP: 000000000072bea0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 0000000000000287 R14: 00000000006f5d48 R15: 0000000000000000

Showing all locks held in the system:
2 locks held by khungtaskd/878:
#0: (rcu_read_lock){....}, at: [<000000004cf2ddac>]
check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
#0: (rcu_read_lock){....}, at: [<000000004cf2ddac>] watchdog+0x1c5/0xd60
kernel/hung_task.c:249
#1: (tasklist_lock){.+.+}, at: [<00000000fc5e2248>]
debug_show_all_locks+0xd3/0x3d0 kernel/locking/lockdep.c:4470
2 locks held by getty/4404:
#0: (&tty->ldisc_sem){++++}, at: [<00000000c5139392>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<000000003da58a6e>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4405:
#0: (&tty->ldisc_sem){++++}, at: [<00000000c5139392>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<000000003da58a6e>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4406:
#0: (&tty->ldisc_sem){++++}, at: [<00000000c5139392>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<000000003da58a6e>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4407:
#0: (&tty->ldisc_sem){++++}, at: [<00000000c5139392>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<000000003da58a6e>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4408:
#0: (&tty->ldisc_sem){++++}, at: [<00000000c5139392>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<000000003da58a6e>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4409:
#0: (&tty->ldisc_sem){++++}, at: [<00000000c5139392>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<000000003da58a6e>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4410:
#0: (&tty->ldisc_sem){++++}, at: [<00000000c5139392>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<000000003da58a6e>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
3 locks held by syz-executor3/13421:
#0: (&lo->lo_ctl_mutex/1){+.+.}, at: [<00000000834f78af>]
lo_ioctl+0x8b/0x1b70 drivers/block/loop.c:1355
#1: (&bdev->bd_mutex){+.+.}, at: [<0000000003605603>]
blkdev_reread_part+0x1e/0x40 block/ioctl.c:192
#2: (&type->s_umount_key#77){.+.+}, at: [<0000000077701649>]
__get_super.part.9+0x1d3/0x280 fs/super.c:663
1 lock held by syz-executor3/13464:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000c39e77db>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
1 lock held by syz-executor3/13466:
#0: (&lo->lo_ctl_mutex/1){+.+.}, at: [<00000000834f78af>]
lo_ioctl+0x8b/0x1b70 drivers/block/loop.c:1355
1 lock held by syz-executor2/13423:
#0: (&bdev->bd_mutex){+.+.}, at: [<0000000032c86bf7>]
blkdev_put+0x2a/0x4f0 fs/block_dev.c:1808
2 locks held by syz-executor0/13428:
#0: (&type->s_umount_key#76/1){+.+.}, at: [<00000000d25ba33a>]
alloc_super fs/super.c:211 [inline]
#0: (&type->s_umount_key#76/1){+.+.}, at: [<00000000d25ba33a>]
sget_userns+0x3a1/0xe40 fs/super.c:502
#1: (&lo->lo_ctl_mutex/1){+.+.}, at: [<00000000834f78af>]
lo_ioctl+0x8b/0x1b70 drivers/block/loop.c:1355
1 lock held by syz-executor0/13465:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000c39e77db>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
1 lock held by blkid/13434:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000c39e77db>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
1 lock held by syz-executor2/13638:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000c39e77db>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
1 lock held by syz-executor2/13639:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000c39e77db>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 878 Comm: khungtaskd Not tainted 4.16.0-rc7+ #9
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x194/0x24d lib/dump_stack.c:53
nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
nmi_trigger_cpumask_backtrace+0x123/0x180 lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
check_hung_task kernel/hung_task.c:132 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
watchdog+0x90c/0xd60 kernel/hung_task.c:249
kthread+0x33c/0x400 kernel/kthread.c:238
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt+0x6/0x10
arch/x86/include/asm/irqflags.h:54


---
This bug is generated by a dumb bot. It may contain errors.
See https://goo.gl/tpsmEJ for details.
Direct all questions to syzk...@googlegroups.com.

syzbot will keep track of this bug report.
If you forgot to add the Reported-by tag, once the fix for this bug is
merged
into any tree, please reply to this email with:
#syz fix: exact-commit-title
To mark this as a duplicate of another syzbot report, please reply with:
#syz dup: exact-subject-of-another-report
If it's a one-off invalid bug report, please reply with:
#syz invalid
Note: if the crash happens again, it will cause creation of a new bug
report.
Note: all commands must start from beginning of the line in the email body.
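[Editorial note: the hang above has a simple shape: the BLKRRPART ioctl reaches get_super(), which does down_read(&sb->s_umount), while another task holds s_umount across slow I/O. As a rough userspace analogy (this is not kernel code; all names are illustrative, and a plain mutex stands in for the rw_semaphore), the same blocking pattern can be sketched like this:]

```python
import threading
import time

s_umount = threading.Lock()  # stand-in for the superblock's s_umount rw_semaphore
waited = []

def mount_like():
    # "mount(2)": holds s_umount while doing slow I/O (the sb_bread() stand-in)
    with s_umount:
        time.sleep(0.5)

def ioctl_like():
    # "ioctl(BLKRRPART)": fsync_bdev -> get_super -> down_read(&sb->s_umount)
    t0 = time.monotonic()
    with s_umount:
        pass
    waited.append(time.monotonic() - t0)

m = threading.Thread(target=mount_like)
m.start()
time.sleep(0.1)            # let the "mount" thread take the lock first
i = threading.Thread(target=ioctl_like)
i.start()
m.join()
i.join()
print(f"ioctl path blocked for ~{waited[0]:.1f}s")
```

If the holder's "I/O" takes minutes instead of half a second, the blocked task looks exactly like the hung syz-executor3 above.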

Tetsuo Handa

Jun 19, 2018, 7:45:08 AM
to syzbot, syzkall...@googlegroups.com, linux-...@vger.kernel.org, linux-...@vger.kernel.org, vi...@zeniv.linux.org.uk
This bug report is getting no feedback, but I guess that this bug is in the
block, mm, or locking layer rather than the fs layer.

The NMI backtrace for this bug tends to show that a stalling sb_bread() call,
issued from fill_super() via mount_bdev(), is what keeps s_umount_key held for
more than 120 seconds. What is strange is that the backtrace tends to point at
the rcu_read_lock()/pagecache_get_page()/radix_tree_deref_slot()/
rcu_read_unlock() sequence, which is not expected to stall.

Since CONFIG_RCU_CPU_STALL_TIMEOUT is set to 120 (effectively 125 due to
CONFIG_PROVE_RCU=y), which is longer than CONFIG_DEFAULT_HUNG_TASK_TIMEOUT,
maybe setting CONFIG_RCU_CPU_STALL_TIMEOUT to a smaller value (e.g. 25) can
give us some hints...
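[Editorial note: the timeouts being compared here are compile-time options. A .config fragment with the values quoted in this thread would look like the following; note that 25 is the experimental value suggested above, not a shipped default:]

```
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
CONFIG_PROVE_RCU=y
# suggested experiment: lower the RCU stall timeout below the hung-task timeout
CONFIG_RCU_CPU_STALL_TIMEOUT=25
```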

Dmitry Vyukov

Jun 19, 2018, 7:53:52 AM
to Tetsuo Handa, syzbot, syzkaller-bugs, linux-fsdevel, LKML, Al Viro
If an RCU stall is the true root cause of this, then I would guess we would
see the corresponding "rcu stall" bug too. An RCU stall is detected after 120
seconds, but a task hang only after 120-240 seconds, so an RCU stall has a
much higher chance of being detected. Do you see the corresponding "rcu
stall" bug?

But, yes, we need to tune all timeouts. There is
https://github.com/google/syzkaller/issues/516 for this.
We also need "kernel/hung_task.c: allow to set checking interval
separately from timeout" to be merged:
https://groups.google.com/forum/#!topic/syzkaller/rOr3WBE-POY
as currently it's very hard to tune the task-hung timeout. But maybe we
will need similar patches for other watchdogs too if they have the
same problem.

Tetsuo Handa

Jun 19, 2018, 10:10:43 AM
to Dmitry Vyukov, syzbot, syzkaller-bugs, linux-fsdevel, LKML, Al Viro
On 2018/06/19 20:53, Dmitry Vyukov wrote:
> On Tue, Jun 19, 2018 at 1:44 PM, Tetsuo Handa
> <penguin...@i-love.sakura.ne.jp> wrote:
>> This bug report is getting no feedback, but I guess that this bug is in the
>> block, mm, or locking layer rather than the fs layer.
>>
>> The NMI backtrace for this bug tends to show that a stalling sb_bread() call,
>> issued from fill_super() via mount_bdev(), is what keeps s_umount_key held for
>> more than 120 seconds. What is strange is that the backtrace tends to point at
>> the rcu_read_lock()/pagecache_get_page()/radix_tree_deref_slot()/
>> rcu_read_unlock() sequence, which is not expected to stall.
>>
>> Since CONFIG_RCU_CPU_STALL_TIMEOUT is set to 120 (effectively 125 due to
>> CONFIG_PROVE_RCU=y), which is longer than CONFIG_DEFAULT_HUNG_TASK_TIMEOUT,
>> maybe setting CONFIG_RCU_CPU_STALL_TIMEOUT to a smaller value (e.g. 25) can
>> give us some hints...
>
> If an RCU stall is the true root cause of this, then I would guess we would
> see the corresponding "rcu stall" bug too. An RCU stall is detected after 120
> seconds, but a task hang only after 120-240 seconds, so an RCU stall has a
> much higher chance of being detected. Do you see the corresponding "rcu
> stall" bug?

RCU stall is detected after 125 seconds due to CONFIG_PROVE_RCU=y
(e.g. https://syzkaller.appspot.com/bug?id=1fac0fd91219f3f2a03d6fa7deafc95fbed79cc2 ).

I didn't find the corresponding "rcu stall" bug. But a single RCU stall does
not need to last longer than 120 seconds for a task hang to be reported:

down();              // Will take 120 seconds due to multiple RCU stalls
rcu_read_lock();
do_something();
rcu_read_unlock();   // Took 30 seconds for unknown reason.
rcu_read_lock();
do_something();
rcu_read_unlock();   // Took 30 seconds for unknown reason.
rcu_read_lock();
do_something();
rcu_read_unlock();   // Took 30 seconds for unknown reason.
rcu_read_lock();
do_something();
rcu_read_unlock();   // Took 30 seconds for unknown reason.
up();
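[Editorial note: the arithmetic behind that sketch: under CONFIG_RCU_CPU_STALL_TIMEOUT=120 (plus 5 with CONFIG_PROVE_RCU=y) no individual 30-second read-side section trips the RCU stall detector, yet together they keep the lock held past the 120-second hung-task threshold. The 30-second sections are Tetsuo's hypothetical numbers; a quick check:]

```python
# Values taken from this thread; the 30 s sections are hypothetical.
rcu_stall_timeout = 120 + 5   # CONFIG_RCU_CPU_STALL_TIMEOUT, +5 with CONFIG_PROVE_RCU=y
hung_task_timeout = 120       # CONFIG_DEFAULT_HUNG_TASK_TIMEOUT

sections = [30, 30, 30, 30]   # four slow rcu_read_lock()/rcu_read_unlock() sections

# No single section is long enough to trigger the RCU stall detector...
assert all(s < rcu_stall_timeout for s in sections)
# ...but their sum keeps the waiter in down() past the hung-task threshold.
assert sum(sections) >= hung_task_timeout
print("each section below stall threshold; total =", sum(sections))
```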

Dmitry Vyukov

Jun 19, 2018, 10:16:05 AM
to Tetsuo Handa, syzbot, syzkaller-bugs, linux-fsdevel, LKML, Al Viro
On Tue, Jun 19, 2018 at 4:10 PM, Tetsuo Handa
You think this is another false positive?
Like this one https://github.com/google/syzkaller/issues/516#issuecomment-395685629
?

Tetsuo Handa

Jun 19, 2018, 9:15:52 PM
to Dmitry Vyukov, syzbot, syzkaller-bugs, linux-fsdevel, LKML, Al Viro
According to https://syzkaller.appspot.com/text?tag=CrashLog&x=11db16c4400000 from
"INFO: rcu detected stall in __process_echoes":

[ 859.630022] INFO: rcu_sched self-detected stall on CPU
[ 859.635509] 0-....: (124999 ticks this GP) idle=30e/1/4611686018427387906 softirq=287964/287964 fqs=31234
[ 859.645716] (t=125000 jiffies g=156333 c=156332 q=555)
(...snipped...)
[ 860.266660] ? process_one_work+0x1ba0/0x1ba0
[ 860.271135] ? kthread_bind+0x40/0x40
[ 860.274927] ret_from_fork+0x3a/0x50
[ 861.152252] INFO: task kworker/u4:2:59 blocked for more than 120 seconds.
[ 861.159245] Not tainted 4.18.0-rc1+ #109
[ 861.163851] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

The RCU stall was reported immediately before khungtaskd fired. Since syzbot
assigns only 2 CPUs, it may not be rare for a "hung" process to have been
unable to run simply because something else kept occupying the CPUs.

Well, "BUG: soft lockup in __process_echoes" will be a dup of
"INFO: rcu detected stall in __process_echoes". I wonder why the
soft-lockup detector waited for 135 seconds.

Likewise, "BUG: soft lockup in shrink_dcache_parent (2)",
"BUG: soft lockup in snd_virmidi_output_trigger",
"BUG: soft lockup in smp_call_function_many" and
"BUG: soft lockup in do_raw_spin_unlock (2)" all waited for 134 seconds,
while "BUG: soft lockup in d_walk" waited for only 22 seconds...

Anyway, I think that in some cases RCU stalls/soft lockups are the cause of
hung tasks.

syzbot

Apr 28, 2019, 2:14:07 PM
to ax...@kernel.dk, dvy...@google.com, ja...@suse.cz, linux-...@vger.kernel.org, linux-...@vger.kernel.org, penguin...@i-love.sakura.ne.jp, penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com, vi...@zeniv.linux.org.uk
syzbot has found a reproducer for the following crash on:

HEAD commit: 037904a2 Merge branch 'x86-urgent-for-linus' of git://git...
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=135ff034a00000
kernel config: https://syzkaller.appspot.com/x/.config?x=a42d110b47dd6b36
dashboard link: https://syzkaller.appspot.com/bug?extid=10007d66ca02b08f0e60
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1291b1f4a00000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=135385a8a00000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+10007d...@syzkaller.appspotmail.com

INFO: task syz-executor274:8097 blocked for more than 143 seconds.
Not tainted 5.1.0-rc6+ #89
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor274 D28008 8097 8041 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2877 [inline]
__schedule+0x813/0x1cc0 kernel/sched/core.c:3518
schedule+0x92/0x180 kernel/sched/core.c:3562
__rwsem_down_read_failed_common kernel/locking/rwsem-xadd.c:285 [inline]
rwsem_down_read_failed+0x213/0x420 kernel/locking/rwsem-xadd.c:302
call_rwsem_down_read_failed+0x18/0x30 arch/x86/lib/rwsem.S:94
__down_read arch/x86/include/asm/rwsem.h:83 [inline]
down_read+0x49/0x90 kernel/locking/rwsem.c:26
__get_super.part.0+0x203/0x2e0 fs/super.c:788
__get_super include/linux/spinlock.h:329 [inline]
get_super+0x2e/0x50 fs/super.c:817
fsync_bdev+0x19/0xd0 fs/block_dev.c:525
invalidate_partition+0x36/0x60 block/genhd.c:1581
drop_partitions block/partition-generic.c:443 [inline]
rescan_partitions+0xef/0xa20 block/partition-generic.c:516
__blkdev_reread_part+0x1a2/0x230 block/ioctl.c:173
blkdev_reread_part+0x27/0x40 block/ioctl.c:193
loop_reread_partitions+0x1c/0x40 drivers/block/loop.c:633
loop_set_status+0xe57/0x1380 drivers/block/loop.c:1296
loop_set_status64+0xc2/0x120 drivers/block/loop.c:1416
lo_ioctl+0x8fc/0x2150 drivers/block/loop.c:1559
__blkdev_driver_ioctl block/ioctl.c:303 [inline]
blkdev_ioctl+0x6f2/0x1d10 block/ioctl.c:605
block_ioctl+0xee/0x130 fs/block_dev.c:1933
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:509 [inline]
do_vfs_ioctl+0xd6e/0x1390 fs/ioctl.c:696
ksys_ioctl+0xab/0xd0 fs/ioctl.c:713
__do_sys_ioctl fs/ioctl.c:720 [inline]
__se_sys_ioctl fs/ioctl.c:718 [inline]
__x64_sys_ioctl+0x73/0xb0 fs/ioctl.c:718
do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x441937
Code: 48 83 c4 08 48 89 d8 5b 5d c3 66 0f 1f 84 00 00 00 00 00 48 89 e8 48
f7 d8 48 39 c3 0f 92 c0 eb 92 66 90 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff
ff 0f 83 8d 08 fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007ffcabed4e08 EFLAGS: 00000202 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000000441937
RDX: 00007ffcabed4ea0 RSI: 0000000000004c04 RDI: 0000000000000004
RBP: 0000000000000003 R08: 0000000000000000 R09: 000000000000000a
R10: 0000000000000075 R11: 0000000000000202 R12: 0000000000000001
R13: 0000000000000004 R14: 0000000000000000 R15: 0000000000000000
INFO: task blkid:8099 blocked for more than 143 seconds.
Not tainted 5.1.0-rc6+ #89
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
blkid D27504 8099 8021 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2877 [inline]
__schedule+0x813/0x1cc0 kernel/sched/core.c:3518
schedule+0x92/0x180 kernel/sched/core.c:3562
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:3620
__mutex_lock_common kernel/locking/mutex.c:1002 [inline]
__mutex_lock+0x726/0x1310 kernel/locking/mutex.c:1072
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
blkdev_put+0x34/0x560 fs/block_dev.c:1866
blkdev_close+0x8b/0xb0 fs/block_dev.c:1915
__fput+0x2e5/0x8d0 fs/file_table.c:278
____fput+0x16/0x20 fs/file_table.c:309
task_work_run+0x14a/0x1c0 kernel/task_work.c:113
tracehook_notify_resume include/linux/tracehook.h:188 [inline]
exit_to_usermode_loop+0x273/0x2c0 arch/x86/entry/common.c:166
prepare_exit_to_usermode arch/x86/entry/common.c:197 [inline]
syscall_return_slowpath arch/x86/entry/common.c:268 [inline]
do_syscall_64+0x52d/0x610 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7f1ae9c432b0
Code: Bad RIP value.
RSP: 002b:00007ffc29ff6028 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f1ae9c432b0
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000028 R09: 0000000001680000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000b9d030
R13: 0000000000000000 R14: 0000000000000003 R15: 0000000000000005

Showing all locks held in the system:
1 lock held by khungtaskd/1041:
#0: 0000000027887009 (rcu_read_lock){....}, at:
debug_show_all_locks+0x5f/0x27e kernel/locking/lockdep.c:5057
1 lock held by rs:main Q:Reg/7921:
#0: 00000000e00580d7 (&rq->lock){-.-.}, at: rq_lock
kernel/sched/sched.h:1168 [inline]
#0: 00000000e00580d7 (&rq->lock){-.-.}, at: __schedule+0x1f8/0x1cc0
kernel/sched/core.c:3456
1 lock held by rsyslogd/7923:
#0: 00000000488dcec4 (&f->f_pos_lock){+.+.}, at: __fdget_pos+0xee/0x110
fs/file.c:801
2 locks held by getty/8014:
#0: 000000001b56f3c3 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
#1: 00000000da9faa8c (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8015:
#0: 00000000e00c81bb (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
#1: 000000008d689a2e (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8016:
#0: 00000000177f6359 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
#1: 0000000096437898 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8017:
#0: 000000002db00e12 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
#1: 0000000071f2d88e (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8018:
#0: 00000000a41b6290 (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
#1: 00000000c340e26f (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8019:
#0: 00000000bca104ce (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
#1: 000000007e045212 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8020:
#0: 0000000070fdafae (&tty->ldisc_sem){++++}, at:
ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
#1: 00000000b4e26fa9 (&ldata->atomic_read_lock){+.+.}, at:
n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
1 lock held by syz-executor274/8083:
2 locks held by syz-executor274/8097:
#0: 000000007a5ed526 (&bdev->bd_mutex){+.+.}, at:
blkdev_reread_part+0x1f/0x40 block/ioctl.c:192
#1: 0000000067606e21 (&type->s_umount_key#39){.+.+}, at:
__get_super.part.0+0x203/0x2e0 fs/super.c:788
1 lock held by blkid/8099:
#0: 000000007a5ed526 (&bdev->bd_mutex){+.+.}, at: blkdev_put+0x34/0x560
fs/block_dev.c:1866
2 locks held by syz-executor274/11705:
#0: 000000002b6bbb34 (&bdev->bd_mutex){+.+.}, at: __blkdev_put+0xbb/0x810
fs/block_dev.c:1833
#1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_release+0x1f/0x200
drivers/block/loop.c:1755
2 locks held by syz-executor274/11709:
#0: 00000000a45cb906 (&bdev->bd_mutex){+.+.}, at: __blkdev_put+0xbb/0x810
fs/block_dev.c:1833
#1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_release+0x1f/0x200
drivers/block/loop.c:1755
2 locks held by syz-executor274/11716:
#0: 00000000a19e2025 (&type->s_umount_key#38/1){+.+.}, at:
alloc_super+0x158/0x890 fs/super.c:228
#1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_simple_ioctl
drivers/block/loop.c:1514 [inline]
#1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_ioctl+0x266/0x2150
drivers/block/loop.c:1572
2 locks held by syz-executor274/11717:
#0: 00000000e185c083 (&type->s_umount_key#38/1){+.+.}, at:
alloc_super+0x158/0x890 fs/super.c:228
#1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_simple_ioctl
drivers/block/loop.c:1514 [inline]
#1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_ioctl+0x266/0x2150
drivers/block/loop.c:1572
2 locks held by blkid/11718:
#0: 000000003d9e77b2 (&bdev->bd_mutex){+.+.}, at: __blkdev_put+0xbb/0x810
fs/block_dev.c:1833
#1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: __loop_clr_fd+0x88/0xd60
drivers/block/loop.c:1046
2 locks held by blkid/11720:
#0: 000000000c0297bc (&bdev->bd_mutex){+.+.}, at: __blkdev_put+0xbb/0x810
fs/block_dev.c:1833
#1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_release+0x1f/0x200
drivers/block/loop.c:1755

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 1041 Comm: khungtaskd Not tainted 5.1.0-rc6+ #89
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x172/0x1f0 lib/dump_stack.c:113
nmi_cpu_backtrace.cold+0x63/0xa4 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x1be/0x236 lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:204 [inline]
watchdog+0x9b7/0xec0 kernel/hung_task.c:288
kthread+0x357/0x430 kernel/kthread.c:253
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352

Al Viro

Apr 28, 2019, 2:51:20 PM
to syzbot, ax...@kernel.dk, dvy...@google.com, ja...@suse.cz, linux-...@vger.kernel.org, linux-...@vger.kernel.org, penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
ioctl(..., BLKRRPART) blocked on ->s_umount in __get_super().
The trouble is, the only things holding ->s_umount appear to be
these:

> 2 locks held by syz-executor274/11716:
> #0: 00000000a19e2025 (&type->s_umount_key#38/1){+.+.}, at:
> alloc_super+0x158/0x890 fs/super.c:228
> #1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_simple_ioctl
> drivers/block/loop.c:1514 [inline]
> #1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_ioctl+0x266/0x2150
> drivers/block/loop.c:1572

> 2 locks held by syz-executor274/11717:
> #0: 00000000e185c083 (&type->s_umount_key#38/1){+.+.}, at:
> alloc_super+0x158/0x890 fs/super.c:228
> #1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_simple_ioctl
> drivers/block/loop.c:1514 [inline]
> #1: 00000000bde6230e (loop_ctl_mutex){+.+.}, at: lo_ioctl+0x266/0x2150
> drivers/block/loop.c:1572

... and that's bollocks. The ->s_umount held there is the one on a freshly
allocated superblock. It *MUST* be in mount(2); no other syscall should be
able to call alloc_super() in the first place. So what the hell is trying to
call lo_ioctl() inside mount(2)? Something like isofs attempting cdrom
ioctls on the underlying device?

Why do we have loop_func_table->ioctl(), BTW? All in-tree instances are
either NULL or return -EINVAL unconditionally. Considering that the
caller is
	err = lo->ioctl ? lo->ioctl(lo, cmd, arg) : -EINVAL;
we could bloody well just get rid of cryptoloop_ioctl() (the only
non-NULL instance) and stop calling lo_simple_ioctl() from the default
case of the lo_ioctl() switch.

Something like this:

diff --git a/drivers/block/cryptoloop.c b/drivers/block/cryptoloop.c
index 254ee7d54e91..f16468a562f5 100644
--- a/drivers/block/cryptoloop.c
+++ b/drivers/block/cryptoloop.c
@@ -167,12 +167,6 @@ cryptoloop_transfer(struct loop_device *lo, int cmd,
}

static int
-cryptoloop_ioctl(struct loop_device *lo, int cmd, unsigned long arg)
-{
- return -EINVAL;
-}
-
-static int
cryptoloop_release(struct loop_device *lo)
{
struct crypto_sync_skcipher *tfm = lo->key_data;
@@ -188,7 +182,6 @@ cryptoloop_release(struct loop_device *lo)
static struct loop_func_table cryptoloop_funcs = {
.number = LO_CRYPT_CRYPTOAPI,
.init = cryptoloop_init,
- .ioctl = cryptoloop_ioctl,
.transfer = cryptoloop_transfer,
.release = cryptoloop_release,
.owner = THIS_MODULE
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index bf1c61cab8eb..2ec162b80562 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -955,7 +955,6 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
lo->lo_flags = lo_flags;
lo->lo_backing_file = file;
lo->transfer = NULL;
- lo->ioctl = NULL;
lo->lo_sizelimit = 0;
lo->old_gfp_mask = mapping_gfp_mask(mapping);
mapping_set_gfp_mask(mapping, lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
@@ -1064,7 +1063,6 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)

loop_release_xfer(lo);
lo->transfer = NULL;
- lo->ioctl = NULL;
lo->lo_device = NULL;
lo->lo_encryption = NULL;
lo->lo_offset = 0;
@@ -1262,7 +1260,6 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
if (!xfer)
xfer = &none_funcs;
lo->transfer = xfer->transfer;
- lo->ioctl = xfer->ioctl;

if ((lo->lo_flags & LO_FLAGS_AUTOCLEAR) !=
(info->lo_flags & LO_FLAGS_AUTOCLEAR))
@@ -1525,7 +1522,7 @@ static int lo_simple_ioctl(struct loop_device *lo, unsigned int cmd,
err = loop_set_block_size(lo, arg);
break;
default:
- err = lo->ioctl ? lo->ioctl(lo, cmd, arg) : -EINVAL;
+ err = -EINVAL;
}
mutex_unlock(&loop_ctl_mutex);
return err;
@@ -1567,10 +1564,9 @@ static int lo_ioctl(struct block_device *bdev, fmode_t mode,
case LOOP_SET_BLOCK_SIZE:
if (!(mode & FMODE_WRITE) && !capable(CAP_SYS_ADMIN))
return -EPERM;
- /* Fall through */
+ return lo_simple_ioctl(lo, cmd, arg);
default:
- err = lo_simple_ioctl(lo, cmd, arg);
- break;
+ return -EINVAL;
}

return err;
diff --git a/drivers/block/loop.h b/drivers/block/loop.h
index af75a5ee4094..56a9a0c161d7 100644
--- a/drivers/block/loop.h
+++ b/drivers/block/loop.h
@@ -84,7 +84,6 @@ struct loop_func_table {
int (*init)(struct loop_device *, const struct loop_info64 *);
/* release is called from loop_unregister_transfer or clr_fd */
int (*release)(struct loop_device *);
- int (*ioctl)(struct loop_device *, int cmd, unsigned long arg);
struct module *owner;
};

Tetsuo Handa

Apr 28, 2019, 9:38:49 PM
to Al Viro, syzbot, ax...@kernel.dk, dvy...@google.com, ja...@suse.cz, linux-...@vger.kernel.org, linux-...@vger.kernel.org, syzkall...@googlegroups.com
On 2019/04/29 3:51, Al Viro wrote:
> ioctl(..., BLKRRPART) blocked on ->s_umount in __get_super().
> The trouble is, the only things holding ->s_umount appear to be
> these:

Not always true. lockdep_print_held_locks(), called from debug_show_all_locks(),
cannot report locks held by TASK_RUNNING threads. Also, with CONFIG_PRINTK_CALLER=y
enabled, the output from trigger_all_cpu_backtrace() is no longer included in the
report file (i.e. the report is truncated prematurely), because the NMI backtrace
is printed from a different printk() context. If we check the console output, we
can see that

>> 1 lock held by syz-executor274/8083:

was doing mount(2). Since that thread may have been looping for long enough to
trigger the khungtaskd warning, we can't tell whether this is really a
lock-dependency problem.

----------------------------------------
[ 1107.252933][ T1041] NMI backtrace for cpu 0
[ 1107.257402][ T1041] CPU: 0 PID: 1041 Comm: khungtaskd Not tainted 5.1.0-rc6+ #89
[ 1107.264960][ T1041] Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
[ 1107.275380][ T1041] Call Trace:
[ 1107.278691][ T1041] dump_stack+0x172/0x1f0
[ 1107.283216][ T1041] nmi_cpu_backtrace.cold+0x63/0xa4
[ 1107.288469][ T1041] ? lapic_can_unplug_cpu.cold+0x38/0x38
[ 1107.294155][ T1041] nmi_trigger_cpumask_backtrace+0x1be/0x236
[ 1107.300256][ T1041] arch_trigger_cpumask_backtrace+0x14/0x20
[ 1107.306174][ T1041] watchdog+0x9b7/0xec0
[ 1107.310362][ T1041] kthread+0x357/0x430
[ 1107.314446][ T1041] ? reset_hung_task_detector+0x30/0x30
[ 1107.320016][ T1041] ? kthread_cancel_delayed_work_sync+0x20/0x20
[ 1107.326280][ T1041] ret_from_fork+0x3a/0x50
[ 1107.331403][ T1041] Sending NMI from CPU 0 to CPUs 1:
[ 1107.337617][ C1] NMI backtrace for cpu 1
[ 1107.337625][ C1] CPU: 1 PID: 8083 Comm: syz-executor274 Not tainted 5.1.0-rc6+ #89
[ 1107.337631][ C1] Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
[ 1107.337636][ C1] RIP: 0010:debug_lockdep_rcu_enabled.part.0+0xb/0x60
[ 1107.337648][ C1] Code: 5b 5d c3 e8 67 71 e5 ff 0f 1f 80 00 00 00 00 55 48 89 e5 e8 37 ff ff ff 5d c3 0f 1f 44 00 00 48 b8 00 00 00 00 00 fc ff df 55 <48> 89 e5 53 65 48 8b 1c 25 00 ee 01 00 48 8d bb 7c 08 00 00 48 89
[ 1107.337652][ C1] RSP: 0018:ffff8880a85274c8 EFLAGS: 00000202
[ 1107.337661][ C1] RAX: dffffc0000000000 RBX: ffff8880a85275d8 RCX: 1ffffffff12bcd63
[ 1107.337666][ C1] RDX: 0000000000000000 RSI: ffffffff870d8f3c RDI: ffff8880a85275e0
[ 1107.337672][ C1] RBP: ffff8880a85274d8 R08: ffff888081e68540 R09: ffffed1015d25bc8
[ 1107.337677][ C1] R10: ffffed1015d25bc7 R11: ffff8880ae92de3b R12: 0000000000000000
[ 1107.337683][ C1] R13: ffff8880a694d640 R14: ffff88809541b942 R15: 0000000000000006
[ 1107.337689][ C1] FS: 0000000000e0b880(0000) GS:ffff8880ae900000(0000) knlGS:0000000000000000
[ 1107.337693][ C1] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1107.337699][ C1] CR2: ffffffffff600400 CR3: 0000000092d6f000 CR4: 00000000001406e0
[ 1107.337704][ C1] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1107.337710][ C1] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 1107.337713][ C1] Call Trace:
[ 1107.337717][ C1] ? debug_lockdep_rcu_enabled+0x71/0xa0
[ 1107.337721][ C1] xas_descend+0xbf/0x370
[ 1107.337724][ C1] xas_load+0xef/0x150
[ 1107.337728][ C1] find_get_entry+0x13d/0x880
[ 1107.337733][ C1] ? find_get_entries_tag+0xc10/0xc10
[ 1107.337736][ C1] ? mark_held_locks+0xa4/0xf0
[ 1107.337741][ C1] ? pagecache_get_page+0x1a8/0x740
[ 1107.337745][ C1] pagecache_get_page+0x4c/0x740
[ 1107.337749][ C1] __getblk_gfp+0x27e/0x970
[ 1107.337752][ C1] __bread_gfp+0x2f/0x300
[ 1107.337756][ C1] udf_tread+0xf1/0x140
[ 1107.337760][ C1] udf_read_tagged+0x50/0x530
[ 1107.337764][ C1] udf_check_anchor_block+0x1ef/0x680
[ 1107.337768][ C1] ? blkpg_ioctl+0xa90/0xa90
[ 1107.337772][ C1] ? udf_process_sequence+0x35d0/0x35d0
[ 1107.337776][ C1] ? submit_bio+0xba/0x480
[ 1107.337780][ C1] udf_scan_anchors+0x3f4/0x680
[ 1107.337784][ C1] ? udf_check_anchor_block+0x680/0x680
[ 1107.337789][ C1] ? __sanitizer_cov_trace_const_cmp8+0x18/0x20
[ 1107.337793][ C1] ? udf_get_last_session+0x120/0x120
[ 1107.337797][ C1] udf_load_vrs+0x67f/0xc80
[ 1107.337801][ C1] ? udf_scan_anchors+0x680/0x680
[ 1107.337805][ C1] ? udf_bread+0x260/0x260
[ 1107.337809][ C1] ? lockdep_init_map+0x1be/0x6d0
[ 1107.337813][ C1] udf_fill_super+0x7d8/0x16d1
[ 1107.337817][ C1] ? udf_load_vrs+0xc80/0xc80
[ 1107.337820][ C1] ? vsprintf+0x40/0x40
[ 1107.337824][ C1] ? set_blocksize+0x2bf/0x340
[ 1107.337829][ C1] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 1107.337833][ C1] mount_bdev+0x307/0x3c0
[ 1107.337837][ C1] ? udf_load_vrs+0xc80/0xc80
[ 1107.337840][ C1] udf_mount+0x35/0x40
[ 1107.337844][ C1] ? udf_get_pblock_meta25+0x3a0/0x3a0
[ 1107.337848][ C1] legacy_get_tree+0xf2/0x200
[ 1107.337853][ C1] ? __sanitizer_cov_trace_const_cmp4+0x16/0x20
[ 1107.337857][ C1] vfs_get_tree+0x123/0x450
[ 1107.337860][ C1] do_mount+0x1436/0x2c40
[ 1107.337864][ C1] ? copy_mount_string+0x40/0x40
[ 1107.337868][ C1] ? _copy_from_user+0xdd/0x150
[ 1107.337873][ C1] ? __sanitizer_cov_trace_const_cmp8+0x18/0x20
[ 1107.337877][ C1] ? copy_mount_options+0x280/0x3a0
[ 1107.337881][ C1] ksys_mount+0xdb/0x150
[ 1107.337885][ C1] __x64_sys_mount+0xbe/0x150
[ 1107.337889][ C1] do_syscall_64+0x103/0x610
[ 1107.337893][ C1] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 1107.337897][ C1] RIP: 0033:0x441a49
[ 1107.337909][ C1] Code: ad 07 fc ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 7b 07 fc ff c3 66 2e 0f 1f 84 00 00 00 00
[ 1107.337913][ C1] RSP: 002b:00007ffcabed5048 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
[ 1107.337923][ C1] RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000000441a49
[ 1107.337929][ C1] RDX: 0000000020000140 RSI: 0000000020000080 RDI: 0000000020000000
[ 1107.337934][ C1] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[ 1107.337940][ C1] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
[ 1107.337946][ C1] R13: 00000000004026e0 R14: 0000000000000000 R15: 0000000000000000
[ 1107.339387][ T1041] Kernel panic - not syncing: hung_task: blocked tasks
[ 1107.787589][ T1041] CPU: 0 PID: 1041 Comm: khungtaskd Not tainted 5.1.0-rc6+ #89
[ 1107.795319][ T1041] Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
[ 1107.806082][ T1041] Call Trace:
[ 1107.809403][ T1041] dump_stack+0x172/0x1f0
[ 1107.813776][ T1041] panic+0x2cb/0x65c
[ 1107.817695][ T1041] ? __warn_printk+0xf3/0xf3
[ 1107.822391][ T1041] ? lapic_can_unplug_cpu.cold+0x38/0x38
[ 1107.828236][ T1041] ? ___preempt_schedule+0x16/0x18
[ 1107.833386][ T1041] ? nmi_trigger_cpumask_backtrace+0x19e/0x236
[ 1107.839739][ T1041] ? nmi_trigger_cpumask_backtrace+0x1fa/0x236
[ 1107.846024][ T1041] ? nmi_trigger_cpumask_backtrace+0x204/0x236
[ 1107.852196][ T1041] ? nmi_trigger_cpumask_backtrace+0x19e/0x236
[ 1107.858482][ T1041] watchdog+0x9c8/0xec0
[ 1107.862858][ T1041] kthread+0x357/0x430
[ 1107.866943][ T1041] ? reset_hung_task_detector+0x30/0x30
[ 1107.872522][ T1041] ? kthread_cancel_delayed_work_sync+0x20/0x20
[ 1107.878799][ T1041] ret_from_fork+0x3a/0x50
[ 1107.884924][ T1041] Kernel Offset: disabled
[ 1107.889301][ T1041] Rebooting in 86400 seconds..
----------------------------------------

I don't know whether "it is not safe to print locks held by TASK_RUNNING threads"
remains true. But since a thread's state can change at any moment, there is no
guarantee that debug_show_all_locks() prints only locks held by !TASK_RUNNING
threads anyway. I guess that allowing users to print all locks at their own risk,
via some kernel config option, would be fine...

Also, we could replace trigger_all_cpu_backtrace() with a new function which scans
all threads and dumps threads with ->on_cpu == 1 so that the output comes from
the same printk() context.

Dmitry Vyukov

Apr 29, 2019, 1:30:16 AM
to Al Viro, syzbot, Jens Axboe, Jan Kara, linux-fsdevel, LKML, Tetsuo Handa, syzkaller-bugs, Peter Zijlstra
How useful would it be to see full stacks in such lockdep reports?
Now that we have lib/stackdepot.c, which is capable of memorizing a large
number of stacks and converting each of them to a single u32, we could use it
in more debug facilities. I remember +Peter mentioned some problems
with interrupts/reentrancy of stackdepot, but I hope it's resolvable
(at least in some conservative way, since we already call stackdepot from
interrupts).
I think the ODEBUG facility has the same problem of showing only a single
PC in reports for a past stack.
Should I file an issue for this?

syzbot

Apr 29, 2019, 9:10:02 AM
to penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer still triggered
crash:
no output from test machine



Tested on:

commit: 37624b58 Linux 5.1-rc7
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=17ff10dca00000
kernel config: https://syzkaller.appspot.com/x/.config?x=ef1b87b455c397cf
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
patch: https://syzkaller.appspot.com/x/patch.diff?x=101785a8a00000

Tetsuo Handa

Apr 29, 2019, 9:18:57 AM
to Dmitry Vyukov, syzkall...@googlegroups.com
Hello.

The test should have been repeated until at least uptime > 143 seconds.
Why was it prematurely judged as "no output" as of uptime = 64.772989 seconds?
Just a temporary infra problem?

Dmitry Vyukov

Apr 29, 2019, 9:32:02 AM
to Tetsuo Handa, syzkaller-bugs
On Mon, Apr 29, 2019 at 3:18 PM Tetsuo Handa
<penguin...@i-love.sakura.ne.jp> wrote:
>
> Hello.
>
> The test should have been repeated until at least uptime > 143 seconds.
> Why was it prematurely judged as "no output" as of uptime = 64.772989 seconds?
> Just a temporary infra problem?

It should have been run until uptime 363 seconds. It's just that there was
literally no output. 64 is just the last time the kernel printed
something, not the time the "no output" condition was detected.

syzbot

Apr 29, 2019, 6:41:01 PM
to penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer still triggered
crash:
no output from test machine



Tested on:

commit: 83a50840 Merge tag 'seccomp-v5.1-rc8' of git://git.kernel...
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=129340ff200000
kernel config: https://syzkaller.appspot.com/x/.config?x=ef1b87b455c397cf
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
patch: https://syzkaller.appspot.com/x/patch.diff?x=10f00cdca00000

Tetsuo Handa

Apr 29, 2019, 7:03:26 PM
to Dmitry Vyukov, syzkaller-bugs
On 2019/04/29 22:31, Dmitry Vyukov wrote:
> On Mon, Apr 29, 2019 at 3:18 PM Tetsuo Handa
> <penguin...@i-love.sakura.ne.jp> wrote:
>>
>> Hello.
>>
>> The test should have been repeated until at least uptime > 143 seconds.
>> Why was it prematurely judged as "no output" as of uptime = 64.772989 seconds?
>> Just a temporary infra problem?
>
> It should have been run until uptime 363 seconds. It's just that there was
> literally no output. 64 is just the last time the kernel printed
> something, not the time the "no output" condition was detected.

Indeed, with ping printk(), it did run until uptime 369 seconds.
https://syzkaller.appspot.com/x/log.txt?x=129340ff200000

2019/04/29 22:35:33 parsed 1 programs
2019/04/29 22:35:34 executed programs: 0
2019/04/29 22:35:39 executed programs: 12

Something made the reproducer unable to repeat the crash... Is it possible to
make syzbot try "echo t > /proc/sysrq-trigger" before giving up?
Maybe we could do it from the kernel side if we track the last printk() time...

Jan Kara

Apr 29, 2019, 10:55:06 PM
to Al Viro, syzbot, ax...@kernel.dk, dvy...@google.com, ja...@suse.cz, linux-...@vger.kernel.org, linux-...@vger.kernel.org, penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Actually, UDF also calls the CDROMMULTISESSION ioctl during mount. So I could
see how we get to lo_simple_ioctl(), and indeed that would acquire
loop_ctl_mutex under s_umount, which is the other way around from the
BLKRRPART ioctl.

> Why do we have loop_func_table->ioctl(), BTW? All in-tree instances are
> either NULL or return -EINVAL unconditionally. Considering that the
> caller is
> err = lo->ioctl ? lo->ioctl(lo, cmd, arg) : -EINVAL;
> we could bloody well just get rid of cryptoloop_ioctl() (the only
> non-NULL instance) and get rid of calling lo_simple_ioctl() in
> lo_ioctl() switch's default.

Yeah, you're right. And if we push the patch a bit further to not take
loop_ctl_mutex for invalid ioctl number, that would fix the problem. I
can send a fix.

Honza
--
Jan Kara <ja...@suse.com>
SUSE Labs, CR

Al Viro

Apr 29, 2019, 11:11:51 PM
to Jan Kara, syzbot, ax...@kernel.dk, dvy...@google.com, linux-...@vger.kernel.org, linux-...@vger.kernel.org, penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
On Tue, Apr 30, 2019 at 04:55:01AM +0200, Jan Kara wrote:

> Yeah, you're right. And if we push the patch a bit further to not take
> loop_ctl_mutex for invalid ioctl number, that would fix the problem. I
> can send a fix.

Huh? We don't take it until we are in lo_simple_ioctl(), and that patch never
reaches that call for invalid ioctl numbers. What am I missing here?

syzbot

Apr 30, 2019, 6:35:01 AM
to penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer still triggered
crash:
no output from test machine



Tested on:

commit: 3d17a1de Add linux-next specific files for 20190429
git tree:
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
console output: https://syzkaller.appspot.com/x/log.txt?x=16035198a00000
kernel config: https://syzkaller.appspot.com/x/.config?x=a5de954500ed36f7
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
patch: https://syzkaller.appspot.com/x/patch.diff?x=115e7b60a00000

Jan Kara

Apr 30, 2019, 9:07:44 AM
to Al Viro, Jan Kara, syzbot, ax...@kernel.dk, dvy...@google.com, linux-...@vger.kernel.org, linux-...@vger.kernel.org, penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Doesn't it? blkdev_ioctl() calls into __blkdev_driver_ioctl() for
unrecognized ioctl numbers. That calls into lo_ioctl() in case of a loop
device. lo_ioctl() calls into lo_simple_ioctl() for ioctl numbers it
doesn't recognize and lo_simple_ioctl() will lock loop_ctl_mutex as you
say.

Honza

Al Viro

Apr 30, 2019, 9:18:30 AM
to Jan Kara, syzbot, ax...@kernel.dk, dvy...@google.com, linux-...@vger.kernel.org, linux-...@vger.kernel.org, penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Not with the patch upthread. The lo_ioctl() part was

@@ -1567,10 +1564,9 @@ static int lo_ioctl(struct block_device *bdev, fmode_t mode,
case LOOP_SET_BLOCK_SIZE:
if (!(mode & FMODE_WRITE) && !capable(CAP_SYS_ADMIN))
return -EPERM;
- /* Fall through */
+ return lo_simple_ioctl(lo, cmd, arg);
default:
- err = lo_simple_ioctl(lo, cmd, arg);
- break;
+ return -EINVAL;
}

return err;

so anything unrecognized doesn't make it to lo_simple_ioctl() at all.

Jan Kara

Apr 30, 2019, 11:07:58 AM
to Al Viro, Jan Kara, syzbot, ax...@kernel.dk, dvy...@google.com, linux-...@vger.kernel.org, linux-...@vger.kernel.org, penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Ah, right. I've missed that in your patch. So your patch should be really
fixing the problem. Will you post it officially? Thanks!

Tetsuo Handa

Apr 30, 2019, 11:35:10 AM
to Jan Kara, Al Viro, syzbot, ax...@kernel.dk, dvy...@google.com, linux-...@vger.kernel.org, linux-...@vger.kernel.org, syzkall...@googlegroups.com
On 2019/05/01 0:07, Jan Kara wrote:
> Ah, right. I've missed that in your patch. So your patch should be really
> fixing the problem. Will you post it officially? Thanks!

I still cannot understand what the problem is.
According to console output,

----------
INFO: task syz-executor274:8097 blocked for more than 143 seconds.
INFO: task blkid:8099 blocked for more than 143 seconds.

1 lock held by syz-executor274/8083:
2 locks held by syz-executor274/8097:
#0: 000000007a5ed526 (&bdev->bd_mutex){+.+.}, at: blkdev_reread_part+0x1f/0x40 block/ioctl.c:192
#1: 0000000067606e21 (&type->s_umount_key#39){.+.+}, at: __get_super.part.0+0x203/0x2e0 fs/super.c:788
1 lock held by blkid/8099:
#0: 000000007a5ed526 (&bdev->bd_mutex){+.+.}, at: blkdev_put+0x34/0x560 fs/block_dev.c:1866
----------

8099 was blocked for too long waiting for 000000007a5ed526 held by 8097.
8097 was blocked for too long waiting for 0000000067606e21 held by somebody.
Since there is nobody else holding 0000000067606e21,
I guessed that the "somebody" which is holding 0000000067606e21 is 8083.

----------
----------

8083 is doing mount(2) but is not holding 00000000bde6230e (loop_ctl_mutex).
I guessed that something went wrong with 8083 inside __getblk_gfp().
How can loop_ctl_mutex be relevant to this problem?

Tetsuo Handa

May 3, 2019, 6:29:57 AM
to Jan Kara, Al Viro, syzbot, ax...@kernel.dk, dvy...@google.com, linux-...@vger.kernel.org, linux-...@vger.kernel.org, syzkall...@googlegroups.com
On 2019/05/01 0:34, Tetsuo Handa wrote:
> I still cannot understand what the problem is.
(...snipped...)
> I guessed that something went wrong with 8083 inside __getblk_gfp().
> How can loop_ctl_mutex be relevant to this problem?
>

syzbot got a similar NMI backtrace. No loop_ctl_mutex is involved.

INFO: task hung in mount_bdev (2)
https://syzkaller.appspot.com/bug?id=d9b9fa1428ff2466de64fc85256e769f516cea58

Dmitry Vyukov

May 8, 2019, 7:16:44 AM
to Tetsuo Handa, syzkaller-bugs, syzkaller
From: Tetsuo Handa <penguin...@i-love.sakura.ne.jp>
Date: Tue, Apr 30, 2019 at 1:03 AM
To: Dmitry Vyukov
Cc: syzkaller-bugs
Will it help to debug any bugs? What percent of the bugs? I am not
sure it will help to debug even this one.
You may try to submit a patch that dumps all tasks after 4 minutes or so.

Tetsuo Handa

May 8, 2019, 9:18:04 AM
to Dmitry Vyukov, syzkaller-bugs, syzkaller
My concern is not the number / percentage of bugs. My concern is how to collect
information for guessing where the problem is when unexpected silence occurs.

Like explained at https://akari.osdn.jp/capturing-kernel-messages.html ,
it is important to know whether the situation changed over time. Roughly speaking,
the information I am interested in is the output of /usr/bin/top (in order to know
which threads were busy or sleeping) and the backtraces of such threads. But since
we can't count on userspace tools for getting information, triggering SysRq
just before giving up is a shorthand alternative for that purpose.
(Although /usr/bin/top might already be available in the image files,
the current output stream would not allow intermixing the output of
/usr/bin/top, because the console output is either kernel messages or
syzkaller messages.)

Dmitry Vyukov

May 8, 2019, 9:37:36 AM
to Tetsuo Handa, syzkaller-bugs, syzkaller
From: Tetsuo Handa <penguin...@i-love.sakura.ne.jp>
Date: Wed, May 8, 2019 at 3:18 PM
To: Dmitry Vyukov
Cc: syzkaller-bugs, syzkaller
But if it's not useful for debugging any bugs, we should not add it.
Every feature should cover some sensible percentage of bugs, otherwise
we will just continue adding hundreds of such features every month and
they won't be helpful, and soon they will start being harmful because
there will be too much output. There is some commonly useful info,
and there is an infinite tail of custom debugging relevant to a single
bug (or maybe not relevant even to a single bug, e.g. "we tried this
but it did not help to debug the bug"). For the infinite tail of
one-off things, it's more reasonable either to debug locally or to
submit custom patches for testing.

What reproducer did you use to get
https://syzkaller.appspot.com/x/log.txt?x=129340ff200000 ?
It would be useful to understand what happens there and make the kernel
self-diagnose this and give the relevant info. The kernel not
self-diagnosing the problem seems to be the root cause of the hard
debugging. Improving the kernel would benefit all kernel testing and
users, rather than just syzbot.
If the kernel is not healthy, it may also not be possible to send a
sysrq from userspace. A kernel watchdog for the bad condition
would be more reliable.

Dmitry Vyukov

May 8, 2019, 9:51:13 AM
to Tetsuo Handa, syzkaller-bugs, syzkaller
From: Dmitry Vyukov <dvy...@google.com>
Date: Wed, May 8, 2019 at 3:37 PM
To: Tetsuo Handa
Cc: syzkaller-bugs, syzkaller
Is it possible to send sysrq's over console?
The part running on the test machine does not do crash detection and
does not know that the machine underneath is bad (that's generally not
possible). A host machine detects crashes, but then the question is how
it can interact with the test machine when it's suspected to be bad.

Tetsuo Handa

May 8, 2019, 10:54:59 AM
to Dmitry Vyukov, syzkaller-bugs, syzkaller
On 2019/05/08 22:51, Dmitry Vyukov wrote:
> What reproducer did you use to get
> https://syzkaller.appspot.com/x/log.txt?x=129340ff200000 ?

I guess the reproducer syzbot has found. That is,
https://syzkaller.appspot.com/text?tag=ReproC&x=135385a8a00000
if "#syz test:" uses C reproducer and
https://syzkaller.appspot.com/text?tag=ReproSyz&x=1291b1f4a00000
if "#syz test:" uses syz reproducer.

Initially I tried "#syz test:" using
https://syzkaller.appspot.com/x/patch.diff?x=101785a8a00000 in order to test whether
https://lkml.kernel.org/r/39601316-2a59-bbd7...@i-love.sakura.ne.jp
can improve the output. But that "#syz test:" request failed to trigger khungtaskd
messages. Since the last printk() was so early, I retried "#syz test:" using
https://syzkaller.appspot.com/x/patch.diff?x=10f00cdca00000 in order to confirm
that the "#syz test:" request gave up after actually waiting for 300 seconds.

> Is it possible to send sysrq's over console?

Yes. Executing "echo t > /proc/sysrq-trigger" from shell will return
SysRq-t output via printk(). That's why I want syzbot to try it when
syzbot gives up due to "unexpected silence".

> The part running on the test machine does not do crash detection and
> does not know that the machine underneath is bad (generally not
> possible). A host machine detects crashes, but then the question how
> it can interact with the test machine when it's suspected to be bad.

Since "printk: kernel is alive." was printed, the kernel was healthy and
the console loglevel was not changed. Thus, SysRq-t etc. would tell us why
these "#syz test:" requests are resulting in "no output from test machine"
(I mean, "unexpected silence") instead of triggering khungtaskd messages.

Dmitry Vyukov

May 8, 2019, 11:24:01 AM
to Tetsuo Handa, syzkaller-bugs, syzkaller
From: Tetsuo Handa <penguin...@i-love.sakura.ne.jp>
Date: Wed, May 8, 2019 at 4:54 PM
To: Dmitry Vyukov
Cc: syzkaller-bugs, syzkaller

> On 2019/05/08 22:51, Dmitry Vyukov wrote:
> > What reproducer did you use to get
> > https://syzkaller.appspot.com/x/log.txt?x=129340ff200000 ?
>
> I guess the reproducer syzbot has found. That is,
> https://syzkaller.appspot.com/text?tag=ReproC&x=135385a8a00000
> if "#syz test:" uses C reproducer and
> https://syzkaller.appspot.com/text?tag=ReproSyz&x=1291b1f4a00000
> if "#syz test:" uses syz reproducer.


That bug is reliably detected as "task hung":
https://syzkaller.appspot.com/bug?extid=de8b966f09b354eef8dd
I've tried locally with the same result.
There is a proper kernel bug report that already dumps all tasks.
If it did not produce a task list as a one-off flake, one may just
re-run testing to get the task list.



> Initially I tried "#syz test:" using
> https://syzkaller.appspot.com/x/patch.diff?x=101785a8a00000 in order to test whether
> https://lkml.kernel.org/r/39601316-2a59-bbd7...@i-love.sakura.ne.jp
> can improve the output. But that "#syz test:" request failed to trigger khungtaskd
> messages. Since the last printk() was so early, I retried "#syz test:" using
> https://syzkaller.appspot.com/x/patch.diff?x=10f00cdca00000 in order to confirm
> that the "#syz test:" request gave up after actually waiting for 300 seconds.
>
> > Is it possible to send sysrq's over console?
>
> Yes. Executing "echo t > /proc/sysrq-trigger" from shell will return
> SysRq-t output via printk(). That's why I want syzbot to try it when
> syzbot gives up due to "unexpected silence".
>
> > The part running on the test machine does not do crash detection and
> > does not know that the machine underneath is bad (generally not
> > possible). A host machine detects crashes, but then the question how
> > it can interact with the test machine when it's suspected to be bad.
>
> Since "printk: kernel is alive." was printed, the kernel was healthy and
> the console loglevel was not changed. Thus, SysRq-t etc. would tell us why
> these "#syz test:" requests are resulting in "no output from test machine"
> (I mean, "unexpected silence") instead of triggering khungtaskd messages.


I mean literally over the console connection, without involving a shell and
starting multiple new processes. That may not work. The kernel wasn't
healthy, even if it was able to print something. There is a hung task.

Tetsuo Handa

May 8, 2019, 1:00:13 PM
to Dmitry Vyukov, syzkaller-bugs, syzkaller
On 2019/05/09 0:23, Dmitry Vyukov wrote:
> From: Tetsuo Handa <penguin...@i-love.sakura.ne.jp>
> Date: Wed, May 8, 2019 at 4:54 PM
> To: Dmitry Vyukov
> Cc: syzkaller-bugs, syzkaller
>
>> On 2019/05/08 22:51, Dmitry Vyukov wrote:
>>> What reproducer did you use to get
>>> https://syzkaller.appspot.com/x/log.txt?x=129340ff200000 ?
>>
>> I guess the reproducer syzbot has found. That is,
>> https://syzkaller.appspot.com/text?tag=ReproC&x=135385a8a00000
>> if "#syz test:" uses C reproducer and
>> https://syzkaller.appspot.com/text?tag=ReproSyz&x=1291b1f4a00000
>> if "#syz test:" uses syz reproducer.
>
>
> That bug is reliably detected as "task hung":
> https://syzkaller.appspot.com/bug?extid=de8b966f09b354eef8dd

But "#syz test:" failed to detect it as "task hung" in two out of two trials.
The reproducer which "#syz test:" is using for this bug seems unreliable.

> I've tried locally with the same result.
> There is a proper kernel bug report that already dumps all tasks.

Like mentioned at https://lkml.kernel.org/r/39601316-2a59-bbd7...@i-love.sakura.ne.jp ,
one task is missing from the report file. That task appears only in the console
output file, which developers won't look at.

> If it did not produce a task list as a one-off flake, one may just
> re-run testing to get the task list.

OK, I will try "#syz test:" using
https://lkml.kernel.org/r/39601316-2a59-bbd7...@i-love.sakura.ne.jp
for several more times.





> I mean literally over console connection without involving shell and
> start of multiple new processes. That may not work. The kernel wasn't
> healthy, even if it was able to print something. There is hang task.

Why can't we add a process, launched from an ssh connection, which does

int fd = open("/proc/sysrq-trigger", O_WRONLY); /* open before trouble starts */
wait_for_trigger(); /* sleep until the host asks for a dump */
write(fd, "t", 1);  /* SysRq-t: dump all task states */
close(fd);

? "May not work" can't be a reason.
We can give up if the connection was unexpectedly lost. But unless the connection
was unexpectedly lost, we can try to tell the already started process,
which is sleeping at wait_for_trigger(), to resume. If the kernel was not
healthy (e.g. it already panic()ed, the console loglevel was set to
CONSOLE_LOGLEVEL_SILENT, or all CPUs were busy looping inside the kernel),
we just don't see the SysRq output. We just fail to get hints when
"unexpected silence" happened; that's tolerable.

... There is no ssh connection before starting the fuzzing test!?
Then, how can the process which starts fuzzing tests be launched?

syzbot

May 9, 2019, 9:43:01 AM
to penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer still triggered
crash:
no output from test machine



Tested on:

commit: ea986679 Merge git://git.kernel.org/pub/scm/linux/kernel/g..
git tree:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
console output: https://syzkaller.appspot.com/x/log.txt?x=137431f0a00000
kernel config: https://syzkaller.appspot.com/x/.config?x=2bd0da4b8de0b004
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
patch: https://syzkaller.appspot.com/x/patch.diff?x=12a8153ca00000

syzbot

May 9, 2019, 10:09:01 AM
to penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer still triggered
crash:
no output from test machine



Tested on:

commit: ea986679 Merge git://git.kernel.org/pub/scm/linux/kernel/g..
git tree:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
console output: https://syzkaller.appspot.com/x/log.txt?x=12922222a00000
kernel config: https://syzkaller.appspot.com/x/.config?x=2bd0da4b8de0b004
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
patch: https://syzkaller.appspot.com/x/patch.diff?x=17337aaca00000

syzbot

May 9, 2019, 10:34:01 AM
to penguin...@i-love.sakura.ne.jp, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer still triggered
crash:
no output from test machine



Tested on:

commit: ea986679 Merge git://git.kernel.org/pub/scm/linux/kernel/g..
git tree:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
console output: https://syzkaller.appspot.com/x/log.txt?x=159e41f0a00000
kernel config: https://syzkaller.appspot.com/x/.config?x=2bd0da4b8de0b004
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
patch: https://syzkaller.appspot.com/x/patch.diff?x=1410b5d0a00000

Dmitry Vyukov

May 9, 2019, 10:53:55 AM
to Tetsuo Handa, syzkaller-bugs, syzkaller, syzbot
> On 2019/05/09 0:23, Dmitry Vyukov wrote:
> > From: Tetsuo Handa <