INFO: task hung in mousedev_detach_client

syzbot

Sep 15, 2019, 4:53:07 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 8fe42840 Merge 4.9.141 into android-4.9
git tree: android-4.9
console output: https://syzkaller.appspot.com/x/log.txt?x=15171e21600000
kernel config: https://syzkaller.appspot.com/x/.config?x=22a5ba9f73b6da1d
dashboard link: https://syzkaller.appspot.com/bug?extid=63876b61fbb0a9aa8f78
compiler: gcc (GCC) 8.0.1 20180413 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+63876b...@syzkaller.appspotmail.com

Free memory is -13944kB above reserved
lowmemorykiller: Killing 'syz-executor.4' (21630) (tgid 21624), adj 1000,
to free 35016kB on behalf of 'syz-fuzzer' (18711) because
cache 320kB is below limit 6144kB for oom_score_adj 0
Free memory is -13944kB above reserved
INFO: task syz-executor.1:5576 blocked for more than 140 seconds.
Not tainted 4.9.141+ #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.1 D28776 5576 2101 0x80000002
ffff8801c73b4740 ffff8801cfc68000 ffff8801a10cb180 ffff8801d13b0000
ffff8801db721018 ffff8801a68cf510 ffffffff828075c2 ffffffff83c27940
0000000041b58ab3 ffffffff82e33920 00ffffff83c7a7d0 ffff8801db7218f0
Call Trace:
[<ffffffff82808aef>] schedule+0x7f/0x1b0 kernel/sched/core.c:3553
[<ffffffff828142d5>] schedule_timeout+0x735/0xe20 kernel/time/timer.c:1771
[<ffffffff8280a63f>] do_wait_for_common kernel/sched/completion.c:75
[inline]
[<ffffffff8280a63f>] __wait_for_common kernel/sched/completion.c:93
[inline]
[<ffffffff8280a63f>] wait_for_common+0x3ef/0x5d0
kernel/sched/completion.c:101
[<ffffffff8280a838>] wait_for_completion+0x18/0x20
kernel/sched/completion.c:122
[<ffffffff81243b37>] __wait_rcu_gp+0x137/0x1b0 kernel/rcu/update.c:369
[<ffffffff8124c21a>] synchronize_rcu.part.55+0xfa/0x110
kernel/rcu/tree_plugin.h:684
[<ffffffff8124c257>] synchronize_rcu+0x27/0x90 kernel/rcu/tree_plugin.h:685
[<ffffffff8204f826>] mousedev_detach_client+0xf6/0x140
drivers/input/mousedev.c:520
[<ffffffff8204f8d0>] mousedev_release+0x60/0xc0
drivers/input/mousedev.c:528
[<ffffffff81510293>] __fput+0x263/0x700 fs/file_table.c:208
[<ffffffff815107b5>] ____fput+0x15/0x20 fs/file_table.c:244
[<ffffffff8113dc4c>] task_work_run+0x10c/0x180 kernel/task_work.c:116
[<ffffffff810e6c4d>] exit_task_work include/linux/task_work.h:21 [inline]
[<ffffffff810e6c4d>] do_exit+0x78d/0x2a50 kernel/exit.c:833
[<ffffffff810ed3a1>] do_group_exit+0x111/0x300 kernel/exit.c:937
[<ffffffff8110eb61>] get_signal+0x4e1/0x1460 kernel/signal.c:2321
[<ffffffff81052aa5>] do_signal+0x95/0x1b00 arch/x86/kernel/signal.c:807
[<ffffffff81003e2e>] exit_to_usermode_loop+0x10e/0x150
arch/x86/entry/common.c:158
[<ffffffff81005932>] prepare_exit_to_usermode arch/x86/entry/common.c:194
[inline]
[<ffffffff81005932>] syscall_return_slowpath arch/x86/entry/common.c:263
[inline]
[<ffffffff81005932>] do_syscall_64+0x3e2/0x550 arch/x86/entry/common.c:290
[<ffffffff82817893>] entry_SYSCALL_64_after_swapgs+0x5d/0xdb
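
For context, the call at the top of this trace is the synchronize_rcu()
in mousedev_detach_client(). A minimal sketch of the 4.9-era function,
paraphrased from drivers/input/mousedev.c (trimmed; see the tree linked
above for the authoritative source):

    /* Unlink one client from the device's RCU-protected client list,
     * then wait for a full grace period so concurrent readers are done
     * with the client before it can be freed. */
    static void mousedev_detach_client(struct mousedev *mousedev,
                                       struct mousedev_client *client)
    {
            spin_lock(&mousedev->client_lock);
            list_del_rcu(&client->node);
            spin_unlock(&mousedev->client_lock);
            synchronize_rcu();      /* <-- task parked here (line 520) */
    }

The detach path itself looks benign; the hang means the grace period
never completed, which matches the several tasks in the lock dump below
that are likewise stuck in _synchronize_rcu_expedited.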

Showing all locks held in the system:
2 locks held by kworker/0:0/4:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&rew.rew_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by khungtaskd/24:
#0: (rcu_read_lock){......}, at: [<ffffffff8131c0cc>]
check_hung_uninterruptible_tasks kernel/hung_task.c:168 [inline]
#0: (rcu_read_lock){......}, at: [<ffffffff8131c0cc>]
watchdog+0x11c/0xa20 kernel/hung_task.c:239
#1: (tasklist_lock){.+.+..}, at: [<ffffffff813fe63f>]
debug_show_all_locks+0x79/0x218 kernel/locking/lockdep.c:4336
1 lock held by rsyslogd/1891:
#0: (&f->f_pos_lock){+.+.+.}, at: [<ffffffff8156cc7c>]
__fdget_pos+0xac/0xd0 fs/file.c:781
2 locks held by getty/2018:
#0: (&tty->ldisc_sem){++++++}, at: [<ffffffff82815952>]
ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
#1: (&ldata->atomic_read_lock){+.+.+.}, at: [<ffffffff81d37362>]
n_tty_read+0x202/0x16e0 drivers/tty/n_tty.c:2142
2 locks held by kworker/0:3/5601:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by syz-executor.4/7104:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
#1: (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a7b7>]
exp_funnel_lock kernel/rcu/tree_exp.h:289 [inline]
#1: (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a7b7>]
_synchronize_rcu_expedited+0x3a7/0x840 kernel/rcu/tree_exp.h:569
1 lock held by syz-executor.4/7117:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
2 locks held by syz-executor.5/9557:
#0: (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20
net/core/rtnetlink.c:70
#1: (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a749>]
exp_funnel_lock kernel/rcu/tree_exp.h:256 [inline]
#1: (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a749>]
_synchronize_rcu_expedited+0x339/0x840 kernel/rcu/tree_exp.h:569
1 lock held by syz-executor.2/12436:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
1 lock held by syz-executor.2/12439:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
1 lock held by syz-executor.1/17576:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
1 lock held by syz-executor.1/17581:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
2 locks held by syz-executor.5/21388:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
#1: (sk_lock-AF_PACKET){+.+.+.}, at: [<ffffffff827d136d>] lock_sock
include/net/sock.h:1404 [inline]
#1: (sk_lock-AF_PACKET){+.+.+.}, at: [<ffffffff827d136d>]
packet_release+0x4ad/0xb70 net/packet/af_packet.c:3029
1 lock held by syz-executor.0/32119:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
2 locks held by syz-executor.5/1280:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
#1: (sk_lock-AF_PACKET){+.+.+.}, at: [<ffffffff827d136d>] lock_sock
include/net/sock.h:1404 [inline]
#1: (sk_lock-AF_PACKET){+.+.+.}, at: [<ffffffff827d136d>]
packet_release+0x4ad/0xb70 net/packet/af_packet.c:3029
2 locks held by kworker/1:5/3698:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/1:6/3699:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by syz-executor.0/3843:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
#1: (sk_lock-AF_PACKET){+.+.+.}, at: [<ffffffff827d136d>] lock_sock
include/net/sock.h:1404 [inline]
#1: (sk_lock-AF_PACKET){+.+.+.}, at: [<ffffffff827d136d>]
packet_release+0x4ad/0xb70 net/packet/af_packet.c:3029
2 locks held by syz-executor.0/6679:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
#1: (sk_lock-AF_PACKET){+.+.+.}, at: [<ffffffff827d136d>] lock_sock
include/net/sock.h:1404 [inline]
#1: (sk_lock-AF_PACKET){+.+.+.}, at: [<ffffffff827d136d>]
packet_release+0x4ad/0xb70 net/packet/af_packet.c:3029
2 locks held by kworker/0:1/7623:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
1 lock held by syz-executor.2/9520:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
3 locks held by kworker/u4:9/9656:
#0: ("%s""netns"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: (net_cleanup_work){+.+.+.}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
#2: (net_mutex){+.+.+.}, at: [<ffffffff822e681f>] cleanup_net+0x13f/0x8b0
net/core/net_namespace.c:439
1 lock held by syz-executor.1/11043:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
2 locks held by kworker/1:0/11902:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/0:2/14988:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/0:4/17920:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
3 locks held by kworker/0:5/17949:
#0: ("%s"("ipv6_addrconf")){.+.+..}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((addr_chk_work).work){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
#2: (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20
net/core/rtnetlink.c:70
2 locks held by syz-executor.1/18768:
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
inode_lock include/linux/fs.h:766 [inline]
#0: (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>]
__sock_release+0x8b/0x260 net/socket.c:604
#1: (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20
net/core/rtnetlink.c:70
2 locks held by kworker/1:1/21039:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/1:2/21675:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/1:3/21676:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/1:7/21678:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/1:8/21680:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/1:9/21681:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&map->work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/0:6/21682:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/0:7/21684:
#0: ("events"){.+.+.+}, at: [<ffffffff81130f0c>]
process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
#1: ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>]
process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
1 lock held by init/21686:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/21687:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/21688:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/21689:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/21690:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130
1 lock held by init/21691:
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open_by_driver
drivers/tty/tty_io.c:2052 [inline]
#0: (tty_mutex){+.+.+.}, at: [<ffffffff81d2bb96>] tty_open+0x476/0xdf0
drivers/tty/tty_io.c:2130

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 24 Comm: khungtaskd Not tainted 4.9.141+ #1
ffff8801d9907d08 ffffffff81b42e79 0000000000000000 0000000000000001
0000000000000001 0000000000000001 ffffffff810983b0 ffff8801d9907d40
ffffffff81b4df89 0000000000000001 0000000000000000 0000000000000002
Call Trace:
[<ffffffff81b42e79>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81b42e79>] dump_stack+0xc1/0x128 lib/dump_stack.c:51
[<ffffffff81b4df89>] nmi_cpu_backtrace.cold.0+0x48/0x87
lib/nmi_backtrace.c:99
[<ffffffff81b4df1c>] nmi_trigger_cpumask_backtrace+0x12c/0x151
lib/nmi_backtrace.c:60
[<ffffffff810984b4>] arch_trigger_cpumask_backtrace+0x14/0x20
arch/x86/kernel/apic/hw_nmi.c:37
[<ffffffff8131c65d>] trigger_all_cpu_backtrace include/linux/nmi.h:58
[inline]
[<ffffffff8131c65d>] check_hung_task kernel/hung_task.c:125 [inline]
[<ffffffff8131c65d>] check_hung_uninterruptible_tasks
kernel/hung_task.c:182 [inline]
[<ffffffff8131c65d>] watchdog+0x6ad/0xa20 kernel/hung_task.c:239
[<ffffffff81142c3d>] kthread+0x26d/0x300 kernel/kthread.c:211
[<ffffffff82817a5c>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 2057 Comm: syz-fuzzer Not tainted 4.9.141+ #1
task: ffff8801d1fe8000 task.stack: ffff8801cec20000
RIP: 0010:[<ffffffff81243be7>] [<ffffffff81243be7>]
debug_lockdep_rcu_enabled.part.0+0x37/0x60 kernel/rcu/update.c:265
RSP: 0000:ffff8801cec273b0 EFLAGS: 00000002
RAX: 0000000000000007 RBX: ffff8801d1fe8000 RCX: 1ffffffff05cec80
RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8801d1fe88ac
RBP: ffff8801cec273b8 R08: ffff8801d1fe8998 R09: ddd68c299640ddb6
R10: 0000000000000000 R11: 0000000000000001 R12: ffff8801a8e417c0
R13: ffff8801d1fe8000 R14: ffff8801a8e41f80 R15: ffff8801a8e41ef8
FS: 000000c42005a768(0000) GS:ffff8801db600000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000411990 CR3: 00000001cf0c6000 CR4: 00000000001606b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Stack:
ffffffff830cc360 ffff8801cec273c8 ffffffff81243c87 ffff8801cec27480
ffffffff8120d363 0000000000000000 0000000000000000 ffff8801cec27460
0000000000000246 0000000000000000 ffffffff8141a061 ffff880100000000
Call Trace:
[<ffffffff81243c87>] debug_lockdep_rcu_enabled+0x77/0x90
kernel/rcu/update.c:264
[<ffffffff8120d363>] trace_lock_release include/trace/events/lock.h:57
[inline]
[<ffffffff8120d363>] lock_release+0x7e3/0xc20 kernel/locking/lockdep.c:3774
[<ffffffff8141a0cb>] rcu_lock_release include/linux/rcupdate.h:498 [inline]
[<ffffffff8141a0cb>] rcu_read_unlock include/linux/rcupdate.h:931 [inline]
[<ffffffff8141a0cb>] find_lock_task_mm+0x15b/0x270 mm/oom_kill.c:122
[<ffffffff821effdf>] lowmem_scan+0x34f/0xaf0
drivers/staging/android/lowmemorykiller.c:134
[<ffffffff81449cc6>] do_shrink_slab mm/vmscan.c:398 [inline]
[<ffffffff81449cc6>] shrink_slab.part.8+0x3c6/0xa00 mm/vmscan.c:501
[<ffffffff814557fd>] shrink_slab mm/vmscan.c:465 [inline]
[<ffffffff814557fd>] shrink_node+0x1ed/0x740 mm/vmscan.c:2602
[<ffffffff814560c7>] shrink_zones mm/vmscan.c:2749 [inline]
[<ffffffff814560c7>] do_try_to_free_pages mm/vmscan.c:2791 [inline]
[<ffffffff814560c7>] try_to_free_pages+0x377/0xb80 mm/vmscan.c:3002
[<ffffffff81428a01>] __perform_reclaim mm/page_alloc.c:3324 [inline]
[<ffffffff81428a01>] __alloc_pages_direct_reclaim mm/page_alloc.c:3345
[inline]
[<ffffffff81428a01>] __alloc_pages_slowpath mm/page_alloc.c:3697 [inline]
[<ffffffff81428a01>] __alloc_pages_nodemask+0x981/0x1bd0
mm/page_alloc.c:3862
[<ffffffff81415701>] __alloc_pages include/linux/gfp.h:433 [inline]
[<ffffffff81415701>] __alloc_pages_node include/linux/gfp.h:446 [inline]
[<ffffffff81415701>] alloc_pages_node include/linux/gfp.h:460 [inline]
[<ffffffff81415701>] __page_cache_alloc include/linux/pagemap.h:208
[inline]
[<ffffffff81415701>] page_cache_read mm/filemap.c:2007 [inline]
[<ffffffff81415701>] filemap_fault+0xaf1/0x1110 mm/filemap.c:2192
[<ffffffff816e7721>] ext4_filemap_fault+0x71/0xa0 fs/ext4/inode.c:5853
[<ffffffff81492ef3>] __do_fault+0x223/0x500 mm/memory.c:2833
[<ffffffff814a3696>] do_read_fault mm/memory.c:3180 [inline]
[<ffffffff814a3696>] do_fault mm/memory.c:3315 [inline]
[<ffffffff814a3696>] handle_pte_fault mm/memory.c:3516 [inline]
[<ffffffff814a3696>] __handle_mm_fault mm/memory.c:3603 [inline]
[<ffffffff814a3696>] handle_mm_fault+0x1326/0x2350 mm/memory.c:3640
[<ffffffff810b2b33>] __do_page_fault+0x403/0xa60 arch/x86/mm/fault.c:1406
[<ffffffff810b31e7>] do_page_fault+0x27/0x30 arch/x86/mm/fault.c:1469
[<ffffffff828188b5>] page_fault+0x25/0x30 arch/x86/entry/entry_64.S:951
Code: 89 e5 53 65 48 8b 1c 25 00 7e 01 00 48 8d bb ac 08 00 00 48 89
fa 48 c1 ea 03 0f b6 14 02 48 89 f8 83 e0 07 83 c0 03 38 d0 7c 04
<84> d2 75 10 8b 93 ac 08 00 00 31 c0 5b 5d 85 d2 0f 94 c0 c3 e8
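
A note on the CPU 0 sample: the RIP is in lockdep/RCU bookkeeping
(debug_lockdep_rcu_enabled) reached from the rcu_read_unlock() at the
end of find_lock_task_mm(), which lowmem_scan() calls for each candidate
task during shrinker-driven reclaim. A minimal sketch, paraphrased from
the 4.9-era mm/oom_kill.c (trimmed):

    /* Scan p's thread group under RCU and return the first thread that
     * still owns an mm, with its task_lock held; NULL if none. */
    struct task_struct *find_lock_task_mm(struct task_struct *p)
    {
            struct task_struct *t;

            rcu_read_lock();
            for_each_thread(p, t) {
                    task_lock(t);
                    if (likely(t->mm))
                            goto found;
                    task_unlock(t);
            }
            t = NULL;
    found:
            rcu_read_unlock();      /* mm/oom_kill.c:122 in the trace */
            return t;
    }

In other words, syz-fuzzer on CPU 0 was reclaiming memory when the NMI
arrived; it appears to be a bystander rather than the blocked task.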


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jan 13, 2020, 2:53:06 PM
to syzkaller-a...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.