INFO: task hung in tty_read


syzbot

Apr 14, 2019, 5:30:18 AM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: c3282d18 ANDROID: sched/debug: Make Energy Model read-only
git tree: android-4.9
console output: https://syzkaller.appspot.com/x/log.txt?x=16e7d533400000
kernel config: https://syzkaller.appspot.com/x/.config?x=13558268b29d9d4a
dashboard link: https://syzkaller.appspot.com/bug?extid=a9433cfa727c1ab9233b
compiler: gcc (GCC) 8.0.1 20180413 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+a9433c...@syzkaller.appspotmail.com

blk_update_request: 250 callbacks suppressed
blk_update_request: I/O error, dev loop0, sector 0
buffer_io_error: 250 callbacks suppressed
Buffer I/O error on dev loop0, logical block 0, lost async page write
INFO: task syz-executor0:4476 blocked for more than 140 seconds.
Not tainted 4.9.135+ #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor0 D29864 4476 2095 0x00000000
ffff8801a9eb4740 ffff8801aa089080 ffff8801aa089080 ffff8801a9eb17c0
ffff8801db621018 ffff8801c32ff950 ffffffff82806912 ffff880100000001
ffffffff00000000 fffffbfff0848da8 00a9865100000001 ffff8801db6218f0
Call Trace:
[<ffffffff82807e3f>] schedule+0x7f/0x1b0 kernel/sched/core.c:3553
[<ffffffff828135e5>] schedule_timeout+0x735/0xe20 kernel/time/timer.c:1771
[<ffffffff81d42f5c>] down_read_failed drivers/tty/tty_ldsem.c:241 [inline]
[<ffffffff81d42f5c>] __ldsem_down_read_nested+0x33c/0x610
drivers/tty/tty_ldsem.c:332
[<ffffffff82814c62>] ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
[<ffffffff81d3c8b5>] tty_ldisc_ref_wait+0x25/0x80
drivers/tty/tty_ldisc.c:275
[<ffffffff81d21e2a>] tty_read+0xfa/0x270 drivers/tty/tty_io.c:1084
[<ffffffff81507b15>] __vfs_read+0x115/0x560 fs/read_write.c:449
[<ffffffff8150a794>] vfs_read+0x124/0x390 fs/read_write.c:472
[<ffffffff8150e7f9>] SYSC_read fs/read_write.c:588 [inline]
[<ffffffff8150e7f9>] SyS_read+0xd9/0x1c0 fs/read_write.c:581
[<ffffffff810056ef>] do_syscall_64+0x19f/0x550 arch/x86/entry/common.c:285
[<ffffffff82816b93>] entry_SYSCALL_64_after_swapgs+0x5d/0xdb
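
The blocked task is parked waiting for a read-side grant of tty->ldisc_sem
(tty_ldisc_ref_wait sleeps in ldsem_down_read with MAX_SCHEDULE_TIMEOUT, so
it never times out on its own). Other tasks in the lock listing below
already hold the same semaphore for read, and a new reader only queues
behind them when a writer (a tty_set_ldisc() or hangup path, not visible in
the listing) is already waiting, because the tty ldsem gives waiting
writers preference. A minimal userspace analogy of that behaviour, using a
pthread rwlock with writer preference rather than the kernel's own ldsem
code, with all names hypothetical:

/* Analogy only: a writer-preferring rwlock makes a new reader hang
 * behind a stalled writer, mirroring pid 4476 in ldsem_down_read().
 * Build with: gcc -pthread demo.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t ldisc_sem;

static void *holder(void *arg)
{
	/* Like the getty/executor tasks below: read ref, never dropped. */
	pthread_rwlock_rdlock(&ldisc_sem);
	printf("holder: read lock taken, sleeping forever\n");
	pause();
	return NULL;
}

static void *writer(void *arg)
{
	sleep(1);
	printf("writer: waiting for write lock...\n");
	pthread_rwlock_wrlock(&ldisc_sem);	/* blocks behind holder */
	return NULL;
}

int main(void)
{
	pthread_rwlockattr_t attr;
	pthread_t t1, t2;

	pthread_rwlockattr_init(&attr);
	/* Writer preference: a queued writer blocks *new* readers. */
	pthread_rwlockattr_setkind_np(&attr,
			PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
	pthread_rwlock_init(&ldisc_sem, &attr);

	pthread_create(&t1, NULL, holder, NULL);
	pthread_create(&t2, NULL, writer, NULL);
	sleep(2);

	printf("reader: rdlock (the tty_read path)...\n");
	pthread_rwlock_rdlock(&ldisc_sem);	/* hangs, like pid 4476 */
	printf("reader: never printed\n");
	return 0;
}

The final rdlock never returns; that is the userspace mirror of the
schedule() at the top of the call trace.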

Showing all locks held in the system:
2 locks held by khungtaskd/24:
#0: (rcu_read_lock){......}, at: [<ffffffff8131bb4c>]
check_hung_uninterruptible_tasks kernel/hung_task.c:168 [inline]
#0: (rcu_read_lock){......}, at: [<ffffffff8131bb4c>]
watchdog+0x11c/0xa20 kernel/hung_task.c:239
#1: (tasklist_lock){.+.?..}, at: [<ffffffff813fe314>]
debug_show_all_locks+0x79/0x218 kernel/locking/lockdep.c:4336
2 locks held by getty/2021:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff82814c62>]
ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
#1: (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff81d36fc2>]
n_tty_read+0x202/0x16e0 drivers/tty/n_tty.c:2142
2 locks held by syz-executor0/4442:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff82814c62>]
ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
#1: (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff81d36fc2>]
n_tty_read+0x202/0x16e0 drivers/tty/n_tty.c:2142
1 lock held by syz-executor0/4476:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff82814c62>]
ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
2 locks held by syz-executor5/9121:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff82814c62>]
ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
#1: (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff81d36fc2>]
n_tty_read+0x202/0x16e0 drivers/tty/n_tty.c:2142
2 locks held by syz-executor5/11352:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff82814c62>]
ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
#1: (&ldata->atomic_read_lock){+.+...}, at: [<ffffffff81d36fc2>]
n_tty_read+0x202/0x16e0 drivers/tty/n_tty.c:2142
1 lock held by syz-executor5/16574:
#0: (&evdev->mutex){+.+.+.}, at: [<ffffffff82057602>]
evdev_ioctl_handler+0x112/0x1820 drivers/input/evdev.c:1293
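
Lockdep's listing is internally consistent: every reader acquires in the
same order, tty->ldisc_sem for read first, then ldata->atomic_read_lock
inside the line discipline, so there is no inversion among the readers
themselves; the hang is in acquiring ldisc_sem at all. A simplified
paraphrase of that second acquisition, based on the n_tty.c:2142 frames
cited above (4.9 sources, trimmed, not verbatim):

/* From n_tty_read(), drivers/tty/n_tty.c (4.9), simplified. The caller
 * (tty_read) already holds tty->ldisc_sem for read, so this nests
 * atomic_read_lock under the ldisc semaphore. */
if (file->f_flags & O_NONBLOCK) {
	if (!mutex_trylock(&ldata->atomic_read_lock))
		return -EAGAIN;		/* non-blocking readers bail out */
} else {
	if (mutex_lock_interruptible(&ldata->atomic_read_lock))
		return -ERESTARTSYS;	/* around n_tty.c:2142 above */
}

The getty and executor tasks holding both locks are sleeping in the read
loop, which keeps their ldisc_sem read references pinned the whole time.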

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 24 Comm: khungtaskd Not tainted 4.9.135+ #1
ffff8801d9907d08 ffffffff81b42b89 0000000000000000 0000000000000001
0000000000000001 0000000000000001 ffffffff81098330 ffff8801d9907d40
ffffffff81b4dc99 0000000000000001 0000000000000000 0000000000000002
Call Trace:
[<ffffffff81b42b89>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81b42b89>] dump_stack+0xc1/0x128 lib/dump_stack.c:51
[<ffffffff81b4dc99>] nmi_cpu_backtrace.cold.0+0x48/0x87
lib/nmi_backtrace.c:99
[<ffffffff81b4dc2c>] nmi_trigger_cpumask_backtrace+0x12c/0x151
lib/nmi_backtrace.c:60
[<ffffffff81098434>] arch_trigger_cpumask_backtrace+0x14/0x20
arch/x86/kernel/apic/hw_nmi.c:37
[<ffffffff8131c0dd>] trigger_all_cpu_backtrace include/linux/nmi.h:58
[inline]
[<ffffffff8131c0dd>] check_hung_task kernel/hung_task.c:125 [inline]
[<ffffffff8131c0dd>] check_hung_uninterruptible_tasks
kernel/hung_task.c:182 [inline]
[<ffffffff8131c0dd>] watchdog+0x6ad/0xa20 kernel/hung_task.c:239
[<ffffffff811428dd>] kthread+0x26d/0x300 kernel/kthread.c:211
[<ffffffff82816d5c>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373
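
For context, these khungtaskd frames are the machinery that produced this
report: the watchdog scans for TASK_UNINTERRUPTIBLE tasks whose context
switch count has not moved for a whole timeout window (the 140 seconds
above), then triggers the NMI backtraces that follow. A simplified
paraphrase of the check from the kernel/hung_task.c frames cited above
(4.9, trimmed, not verbatim):

/* Paraphrase of check_hung_task(), kernel/hung_task.c (4.9). */
static void check_hung_task(struct task_struct *t, unsigned long timeout)
{
	/* Voluntary + involuntary context switches so far. */
	unsigned long switch_count = t->nvcsw + t->nivcsw;

	/* Scheduled at least once since the last scan: still making
	 * progress, just remember the new count. */
	if (switch_count != t->last_switch_count) {
		t->last_switch_count = switch_count;
		return;
	}

	/* In D state with no context switch for the whole window:
	 * report it and dump backtraces on all CPUs. */
	pr_err("INFO: task %s:%d blocked for more than %lu seconds.\n",
	       t->comm, t->pid, timeout);
	trigger_all_cpu_backtrace();
}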
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 2095 Comm: syz-executor0 Not tainted 4.9.135+ #1
task: ffff8801d26ddf00 task.stack: ffff8801afaf8000
RIP: 0010:[<ffffffff810a751a>]  [<ffffffff810a751a>]
native_set_pte_at+0x9a/0xe0 arch/x86/include/asm/pgtable.h:845
RSP: 0018:ffff8801afaff9d8 EFLAGS: 00000246
RAX: dffffc0000000000 RBX: 1ffff10035f5ff3c RCX: 1ffff10039228805
RDX: ffff8801c9144028 RSI: ffffed0035f5ff3c RDI: ffff8801afaffa00
RBP: ffff8801afaffa50 R08: ffff8801d26de870 R09: 25be02fcaa179f6a
R10: ffff8801d26ddf00 R11: 0000000000000001 R12: 80000001b1842007
R13: ffff8801d52d6028 R14: dead000000000100 R15: dffffc0000000000
FS: 000000000148e940(0000) GS:ffff8801db600000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000a40021 CR3: 00000001d7e84000 CR4: 00000000001606b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Stack:
ffffffff81496920 0000000041b58ab3 ffffffff82e28ca1 ffffffff810a7480
0000000100000000 80000001b1842007 ffff8801cfbff478 0000000000000001
ffff8801d52d6000 df6f3daded5d780a 80000001b1842007 0000000000000008
Call Trace:
[<ffffffff814a02c4>] set_pte_at arch/x86/include/asm/paravirt.h:470
[inline]
[<ffffffff814a02c4>] copy_one_pte mm/memory.c:911 [inline]
[<ffffffff814a02c4>] copy_pte_range mm/memory.c:954 [inline]
[<ffffffff814a02c4>] copy_pmd_range mm/memory.c:1004 [inline]
[<ffffffff814a02c4>] copy_pud_range mm/memory.c:1026 [inline]
[<ffffffff814a02c4>] copy_page_range+0xc04/0x17a0 mm/memory.c:1088
[<ffffffff810d72d3>] dup_mmap kernel/fork.c:674 [inline]
[<ffffffff810d72d3>] dup_mm kernel/fork.c:1156 [inline]
[<ffffffff810d72d3>] copy_mm kernel/fork.c:1210 [inline]
[<ffffffff810d72d3>] copy_process.part.8+0x44f3/0x6a10 kernel/fork.c:1692
[<ffffffff810d9c72>] copy_process kernel/fork.c:1505 [inline]
[<ffffffff810d9c72>] _do_fork+0x1b2/0xd30 kernel/fork.c:1972
[<ffffffff810da8c7>] SYSC_clone kernel/fork.c:2084 [inline]
[<ffffffff810da8c7>] SyS_clone+0x37/0x50 kernel/fork.c:2078
[<ffffffff810056ef>] do_syscall_64+0x19f/0x550 arch/x86/entry/common.c:285
[<ffffffff82816b93>] entry_SYSCALL_64_after_swapgs+0x5d/0xdb
Code: 65 b0 48 b8 00 00 00 00 00 fc ff df 48 c1 e9 03 80 3c 01 00 75 3c 48
b8 00 00 00 00 00 fc ff df 4c 89 22 48 c7 04 03 00 00 00 00 <48> 8b 45 e8
65 48 33 04 25 28 00 00 00 75 2a 48 83 c4 68 5b 41


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Oct 1, 2019, 8:09:05 AM
to syzkaller-a...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.