INFO: task hung in lo_open (2)

syzbot

Mar 31, 2018, 4:47:10 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot hit the following crash on upstream commit
b5dbc28762fd3fd40ba76303be0c7f707826f982 (Sat Mar 31 04:53:57 2018 +0000)
Merge tag 'kbuild-fixes-v4.16-3' of
git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
syzbot dashboard link:
https://syzkaller.appspot.com/bug?extid=712350eead2a19b102e9

So far this crash has happened 2 times on upstream.
Unfortunately, I don't have any reproducer for this crash yet.
Raw console output:
https://syzkaller.appspot.com/x/log.txt?id=4782288168550400
Kernel config:
https://syzkaller.appspot.com/x/.config?id=-2760467897697295172
compiler: gcc (GCC) 7.1.1 20170620
CC: [ax...@kernel.dk ha...@suse.de linux-...@vger.kernel.org
ming...@redhat.com osa...@fb.com sh...@fb.com]

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+712350...@syzkaller.appspotmail.com
It will help syzbot understand when the bug is fixed. See footer for
details.
If you forward the report, please keep this part and the footer.

QAT: Invalid ioctl
INFO: task syz-executor0:4519 blocked for more than 120 seconds.
Not tainted 4.16.0-rc7+ #8
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor0 D18288 4519 1 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2862 [inline]
__schedule+0x8fb/0x1ec0 kernel/sched/core.c:3440
schedule+0xf5/0x430 kernel/sched/core.c:3499
schedule_preempt_disabled+0x10/0x20 kernel/sched/core.c:3557
__mutex_lock_common kernel/locking/mutex.c:833 [inline]
__mutex_lock+0xaad/0x1a80 kernel/locking/mutex.c:893
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
lo_open+0x1b/0xa0 drivers/block/loop.c:1571
__blkdev_get+0xd51/0x13b0 fs/block_dev.c:1535
blkdev_get+0x399/0xb00 fs/block_dev.c:1609
blkdev_open+0x1c9/0x250 fs/block_dev.c:1767
do_dentry_open+0x667/0xd40 fs/open.c:752
vfs_open+0x107/0x220 fs/open.c:866
do_last fs/namei.c:3379 [inline]
path_openat+0x1151/0x3530 fs/namei.c:3519
do_filp_open+0x25b/0x3b0 fs/namei.c:3554
do_sys_open+0x502/0x6d0 fs/open.c:1059
SYSC_open fs/open.c:1077 [inline]
SyS_open+0x2d/0x40 fs/open.c:1072
do_syscall_64+0x281/0x940 arch/x86/entry/common.c:287
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x40f0b0
RSP: 002b:00007ffcbf0d0428 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 000000000000012e RCX: 000000000040f0b0
RDX: 00007ffcbf0d11ea RSI: 0000000000000002 RDI: 00007ffcbf0d11e0
RBP: 00007ffcbf0d0450 R08: 0000000000000000 R09: 000000000000000a
R10: 0000000000000075 R11: 0000000000000246 R12: 0000000000000013
R13: 0000000000000000 R14: 00000000006fd6e0 R15: 0000000000001380
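
The trace shows syz-executor0 blocked in lo_open() waiting for loop_index_mutex
while its caller, __blkdev_get(), already holds bdev->bd_mutex (see the lock
list below). For reference, a paraphrased sketch of the v4.16-era open path in
drivers/block/loop.c; details may differ slightly from the exact source:

static int lo_open(struct block_device *bdev, fmode_t mode)
{
	struct loop_device *lo;
	int err = 0;

	/* syz-executor0 sits here (loop.c:1571), waiting for the global
	 * loop_index_mutex while __blkdev_get() further up the stack
	 * already holds bdev->bd_mutex. */
	mutex_lock(&loop_index_mutex);
	lo = bdev->bd_disk->private_data;
	if (!lo) {
		err = -ENXIO;
		goto out;
	}
	atomic_inc(&lo->lo_refcnt);
out:
	mutex_unlock(&loop_index_mutex);
	return err;
}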

Showing all locks held in the system:
2 locks held by khungtaskd/876:
#0: (rcu_read_lock){....}, at: [<0000000033f0022a>]
check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
#0: (rcu_read_lock){....}, at: [<0000000033f0022a>] watchdog+0x1c5/0xd60
kernel/hung_task.c:249
#1: (tasklist_lock){.+.+}, at: [<00000000c4cf4a71>]
debug_show_all_locks+0xd3/0x3d0 kernel/locking/lockdep.c:4470
2 locks held by getty/4441:
#0: (&tty->ldisc_sem){++++}, at: [<0000000020c7fc70>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<0000000071051a9d>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4442:
#0: (&tty->ldisc_sem){++++}, at: [<0000000020c7fc70>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<0000000071051a9d>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4443:
#0: (&tty->ldisc_sem){++++}, at: [<0000000020c7fc70>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<0000000071051a9d>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4444:
#0: (&tty->ldisc_sem){++++}, at: [<0000000020c7fc70>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<0000000071051a9d>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4445:
#0: (&tty->ldisc_sem){++++}, at: [<0000000020c7fc70>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<0000000071051a9d>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4446:
#0: (&tty->ldisc_sem){++++}, at: [<0000000020c7fc70>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<0000000071051a9d>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4447:
#0: (&tty->ldisc_sem){++++}, at: [<0000000020c7fc70>]
ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
#1: (&ldata->atomic_read_lock){+.+.}, at: [<0000000071051a9d>]
n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by syz-executor0/4519:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000821a22c0>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
#1: (loop_index_mutex){+.+.}, at: [<00000000f9cf261d>] lo_open+0x1b/0xa0
drivers/block/loop.c:1571
2 locks held by syz-executor4/4520:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000821a22c0>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
#1: (loop_index_mutex){+.+.}, at: [<00000000f9cf261d>] lo_open+0x1b/0xa0
drivers/block/loop.c:1571
2 locks held by syz-executor5/4524:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000821a22c0>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
#1: (loop_index_mutex){+.+.}, at: [<00000000f9cf261d>] lo_open+0x1b/0xa0
drivers/block/loop.c:1571
2 locks held by syz-executor7/15071:
#0: (loop_index_mutex){+.+.}, at: [<000000005cd1897c>]
loop_control_ioctl+0x89/0x490 drivers/block/loop.c:1938
#1: (&lo->lo_ctl_mutex#2){+.+.}, at: [<000000002658a480>]
loop_control_ioctl+0x1a7/0x490 drivers/block/loop.c:1952
2 locks held by blkid/15082:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000821a22c0>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
#1: (loop_index_mutex){+.+.}, at: [<00000000f9cf261d>] lo_open+0x1b/0xa0
drivers/block/loop.c:1571
2 locks held by syz-executor6/15084:
#0: (&lo->lo_ctl_mutex/1){+.+.}, at: [<00000000cc154b8d>]
lo_ioctl+0x8b/0x1b70 drivers/block/loop.c:1355
#1: (&bdev->bd_mutex){+.+.}, at: [<0000000058b7c5b5>]
blkdev_reread_part+0x1e/0x40 block/ioctl.c:192
1 lock held by syz-executor6/15107:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000821a22c0>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
1 lock held by syz-executor6/15109:
#0: (&lo->lo_ctl_mutex/1){+.+.}, at: [<00000000cc154b8d>]
lo_ioctl+0x8b/0x1b70 drivers/block/loop.c:1355
1 lock held by blkid/15088:
#0: (&bdev->bd_mutex){+.+.}, at: [<00000000821a22c0>]
__blkdev_get+0x176/0x13b0 fs/block_dev.c:1458
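
Taken together, the lock lists above suggest a possible lock-ordering cycle:
pid 4519 holds bd_mutex and waits for loop_index_mutex, pid 15071 holds
loop_index_mutex and waits for a lo_ctl_mutex, and pid 15084 holds a
lo_ctl_mutex and waits for bd_mutex in blkdev_reread_part(). Whether the two
lo_ctl_mutex instances ("#2" and "/1") belong to the same loop device is not
visible here, so the cycle is inferred rather than proven. The following
self-contained userspace sketch only models that inferred ordering; the mutex
names mirror the kernel locks, but none of this is kernel code:

/* Userspace model of the inferred cycle. Build with: gcc -pthread model.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t bd_mutex         = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t loop_index_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lo_ctl_mutex     = PTHREAD_MUTEX_INITIALIZER;

/* open("/dev/loopN"): __blkdev_get() takes bd_mutex, lo_open() then wants
 * loop_index_mutex -- like syz-executor0/4519 above. */
static void *opener(void *arg)
{
	pthread_mutex_lock(&bd_mutex);
	sleep(1);                               /* let the others take their first lock */
	pthread_mutex_lock(&loop_index_mutex);  /* blocks here */
	pthread_mutex_unlock(&loop_index_mutex);
	pthread_mutex_unlock(&bd_mutex);
	return NULL;
}

/* loop_control_ioctl(): loop_index_mutex first, then the device's
 * lo_ctl_mutex -- like syz-executor7/15071 above. */
static void *ctl(void *arg)
{
	pthread_mutex_lock(&loop_index_mutex);
	sleep(1);
	pthread_mutex_lock(&lo_ctl_mutex);      /* blocks here */
	pthread_mutex_unlock(&lo_ctl_mutex);
	pthread_mutex_unlock(&loop_index_mutex);
	return NULL;
}

/* lo_ioctl(): lo_ctl_mutex first, then blkdev_reread_part() wants
 * bd_mutex -- like syz-executor6/15084 above. */
static void *reread(void *arg)
{
	pthread_mutex_lock(&lo_ctl_mutex);
	sleep(1);
	pthread_mutex_lock(&bd_mutex);          /* blocks here, closing the cycle */
	pthread_mutex_unlock(&bd_mutex);
	pthread_mutex_unlock(&lo_ctl_mutex);
	return NULL;
}

int main(void)
{
	pthread_t t[3];
	pthread_create(&t[0], NULL, opener, NULL);
	pthread_create(&t[1], NULL, ctl, NULL);
	pthread_create(&t[2], NULL, reread, NULL);
	for (int i = 0; i < 3; i++)
		pthread_join(t[i], NULL);       /* never returns: the three threads deadlock */
	puts("no deadlock (unexpected)");
	return 0;
}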

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 876 Comm: khungtaskd Not tainted 4.16.0-rc7+ #8
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x194/0x24d lib/dump_stack.c:53
nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
nmi_trigger_cpumask_backtrace+0x123/0x180 lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
check_hung_task kernel/hung_task.c:132 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
watchdog+0x90c/0xd60 kernel/hung_task.c:249
kthread+0x33c/0x400 kernel/kthread.c:238
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt+0x6/0x10
arch/x86/include/asm/irqflags.h:54


---
This bug is generated by a dumb bot. It may contain errors.
See https://goo.gl/tpsmEJ for details.
Direct all questions to syzk...@googlegroups.com.

syzbot will keep track of this bug report.
If you forgot to add the Reported-by tag, once the fix for this bug is merged
into any tree, please reply to this email with (see the example after this
list of commands):
#syz fix: exact-commit-title
To mark this as a duplicate of another syzbot report, please reply with:
#syz dup: exact-subject-of-another-report
If it's a one-off invalid bug report, please reply with:
#syz invalid
Note: if the crash happens again, it will cause creation of a new bug report.
Note: all commands must start from the beginning of the line in the email body.
To upstream this report, please reply with:
#syz upstream
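
For example, once a fix lands, a single-line reply such as the following would
close the report (the commit title here is purely hypothetical):

#syz fix: loop: fix deadlock between lo_open() and loop_control_ioctl()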
