INFO: task hung in pipe_read

syzbot

Oct 10, 2018, 12:35:04 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 64c5e530ac2c Merge tag 'arc-4.19-rc8' of git://git.kernel...
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=10061041400000
kernel config: https://syzkaller.appspot.com/x/.config?x=88e9a8a39dc0be2d
dashboard link: https://syzkaller.appspot.com/bug?extid=dcb1af53332a24f4fc49
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
CC: [linux-...@vger.kernel.org linux-...@vger.kernel.org vi...@zeniv.linux.org.uk]

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+dcb1af...@syzkaller.appspotmail.com

team0 (unregistering): Port device team_slave_0 removed
bond0 (unregistering): Releasing backup interface bond_slave_1
bond0 (unregistering): Releasing backup interface bond_slave_0
bond0 (unregistering): Released all slaves
ip6_tunnel: ip6gre1 xmit: Local address not yet configured!
INFO: task syz-executor0:1474 blocked for more than 140 seconds.
Not tainted 4.19.0-rc7+ #54
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor0 D25400 1474 5890 0x00000004
Call Trace:
context_switch kernel/sched/core.c:2825 [inline]
__schedule+0x86c/0x1ed0 kernel/sched/core.c:3473
schedule+0xfe/0x460 kernel/sched/core.c:3517
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:3575
__mutex_lock_common kernel/locking/mutex.c:1002 [inline]
__mutex_lock+0xbe7/0x1700 kernel/locking/mutex.c:1072
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
__pipe_lock fs/pipe.c:83 [inline]
pipe_read+0xc9/0x940 fs/pipe.c:260
call_read_iter include/linux/fs.h:1802 [inline]
new_sync_read fs/read_write.c:406 [inline]
__vfs_read+0x6ac/0x9b0 fs/read_write.c:418
vfs_read+0x17f/0x3c0 fs/read_write.c:452
ksys_read+0x101/0x260 fs/read_write.c:578
__do_sys_read fs/read_write.c:588 [inline]
__se_sys_read fs/read_write.c:586 [inline]
__x64_sys_read+0x73/0xb0 fs/read_write.c:586
do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x457579
Code: Bad RIP value.
RSP: 002b:00007f368d3cec78 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000000457579
RDX: 0000000000066000 RSI: 0000000020000200 RDI: 0000000000000003
RBP: 000000000072bfa0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f368d3cf6d4
R13: 00000000004c2617 R14: 00000000004d4c80 R15: 00000000ffffffff
INFO: lockdep is turned off.
NMI backtrace for cpu 0
CPU: 0 PID: 983 Comm: khungtaskd Not tainted 4.19.0-rc7+ #54
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1c4/0x2b4 lib/dump_stack.c:113
nmi_cpu_backtrace.cold.3+0x63/0xa2 lib/nmi_backtrace.c:101
nmi_trigger_cpumask_backtrace+0x1b3/0x1ed lib/nmi_backtrace.c:62
arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
trigger_all_cpu_backtrace include/linux/nmi.h:144 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:204 [inline]
watchdog+0xb3e/0x1050 kernel/hung_task.c:265
kthread+0x35a/0x420 kernel/kthread.c:246
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:413
Sending NMI from CPU 0 to CPUs 1:
INFO: NMI handler (nmi_cpu_backtrace_handler) took too long to run: 2.876 msecs
NMI backtrace for cpu 1
CPU: 1 PID: 1469 Comm: syz-executor0 Not tainted 4.19.0-rc7+ #54
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:rb_erase+0x8a5/0x3710 lib/rbtree.c:461
Code: 48 89 ce 4c 89 2b 49 83 cc 01 48 b8 00 00 00 00 00 fc ff df 48 c1 ee 03 80 3c 06 00 0f 85 be 22 00 00 4c 89 21 e9 6a fe ff ff <49> 8d 8e 40 f7 ff ff 48 89 cf 48 c1 ef 03 c6 04 1f 00 4c 89 c7 48
RSP: 0018:ffff8800bb60c220 EFLAGS: 00000246
RAX: ffff8801d16a9460 RBX: dffffc0000000000 RCX: 0000000000000000
RDX: 1ffff100176c184c RSI: ffffed00176c18a0 RDI: ffff8801d16a8e68
RBP: ffff8800bb60cc28 R08: ffff8801d16a9068 R09: ffff8801d16a9870
R10: ffffed003a28643c R11: ffff8801d14321e3 R12: ffff8801d16a9060
R13: ffff8801d1432180 R14: ffff8800bb60cc00 R15: ffff8801d16a9860
FS: 00007f368d3f0700(0000) GS:ffff8801daf00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020010000 CR3: 000000019055f000 CR4: 00000000001426e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
elv_rb_del+0x57/0xa0 block/elevator.c:333
deadline_del_rq_rb block/mq-deadline.c:102 [inline]
deadline_remove_request+0x374/0x640 block/mq-deadline.c:118
deadline_move_request block/mq-deadline.c:175 [inline]
__dd_dispatch_request block/mq-deadline.c:364 [inline]
dd_dispatch_request+0x91b/0xd40 block/mq-deadline.c:386
blk_mq_do_dispatch_sched+0x39e/0x580 block/blk-mq-sched.c:95
blk_mq_sched_dispatch_requests+0x62d/0x9b0 block/blk-mq-sched.c:203
__blk_mq_run_hw_queue+0x199/0x260 block/blk-mq.c:1297
__blk_mq_delay_run_hw_queue+0x4f9/0x5b0 block/blk-mq.c:1365
blk_mq_run_hw_queue+0x1cd/0x390 block/blk-mq.c:1402
blk_mq_sched_insert_requests+0x31e/0x460 block/blk-mq-sched.c:422
blk_mq_flush_plug_list+0xb38/0x1230 block/blk-mq.c:1652
blk_flush_plug_list+0x1c7/0x990 block/blk-core.c:3695
blk_mq_make_request+0x1443/0x2630 block/blk-mq.c:1880
generic_make_request+0x9be/0x15a0 block/blk-core.c:2458
submit_bio+0xba/0x460 block/blk-core.c:2566
? __sanitizer_cov_trace_const_
Lost 151 message(s)!
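
For context on the trace above: the blocked task is stuck in __pipe_lock()
(fs/pipe.c:83), i.e. waiting to take pipe->mutex at the top of pipe_read(),
not waiting for pipe data; mutex_lock() sleeps uninterruptibly, which is
what the hung-task watchdog flags after 140 seconds. The register dump gives
the shape of the read() the task issued: fd 3 in RDI and a 0x66000-byte
count in RDX. Since there is no reproducer, the sketch below only mirrors
that syscall shape; the pipe setup and whatever task was holding the mutex
are assumptions.

/* Sketch only: syzbot found no reproducer, so the pipe setup here is an
 * assumption. The read() mirrors the register dump above: fd 3 (RDI) and
 * a 0x66000-byte count (RDX). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	static char buf[0x66000];
	int fds[2];

	if (pipe(fds)) {
		perror("pipe");
		return 1;
	}
	/* pipe_read() takes pipe->mutex via __pipe_lock() before it looks
	 * at the buffer; the hung task never got past that mutex_lock().
	 * Here the pipe is simply empty, so the call blocks in pipe_wait()
	 * instead, but it enters through the same path as the trace:
	 * read -> ksys_read -> vfs_read -> __vfs_read -> pipe_read. */
	ssize_t n = read(fds[0], buf, sizeof(buf));
	printf("read returned %zd\n", n);
	return 0;
}

On a healthy kernel this read takes and promptly releases pipe->mutex and
then sleeps interruptibly, so reproducing the report would additionally
require a task that acquires the pipe mutex and never releases it; the
CPU 1 backtrace suggests that task may have been busy deep in the
mq-deadline dispatch path when the remaining console output was lost.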


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#bug-status-tracking for how to communicate with
syzbot.

syzbot

May 15, 2019, 8:28:02 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.