BUG: unable to handle kernel paging request in restore_regulatory_settings

syzbot

May 3, 2021, 2:10:20 AM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: e3d35712 Add linux-next specific files for 20210423
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=1709a3f5d00000
kernel config: https://syzkaller.appspot.com/x/.config?x=e370221d7500b26a
dashboard link: https://syzkaller.appspot.com/bug?extid=ddf4cccd9c0396cea6b2
CC: [da...@davemloft.net joha...@sipsolutions.net ku...@kernel.org linux-...@vger.kernel.org linux-w...@vger.kernel.org net...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ddf4cc...@syzkaller.appspotmail.com
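
For reference, the tag is a standard commit trailer: it goes at the end of the fix's commit message, next to the other trailers. A hypothetical example (the subject line is invented; the elided address from above is kept as-is):

    cfg80211: fix stale rdev access in restore_regulatory_settings

    Reported-by: syzbot+ddf4cc...@syzkaller.appspotmail.com
    Signed-off-by: A. Developer <dev@example.org>

The trailer is what lets syzbot match the eventual fix to this report and close the bug automatically.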

BUG: unable to handle page fault for address: ffffffff0000368e
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD bc8f067 P4D bc8f067 PUD 0
Oops: 0000 [#1] PREEMPT SMP KASAN
CPU: 0 PID: 8 Comm: kworker/0:2 Not tainted 5.12.0-rc8-next-20210423-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events_power_efficient crda_timeout_work
RIP: 0010:restore_regulatory_settings+0x73c/0x1780 net/wireless/reg.c:3474
Code: 26 f9 48 8b 04 24 48 8d b8 48 06 00 00 48 89 f8 48 c1 e8 03 0f b6 04 18 84 c0 74 08 3c 03 0f 8e 6f 0d 00 00 48 8b 04 24 31 ff <8b> a8 48 06 00 00 41 89 ec 41 81 e4 80 00 00 00 44 89 e6 e8 8c fd
RSP: 0018:ffffc90000cd7c30 EFLAGS: 00010246
RAX: ffffffff00003046 RBX: dffffc0000000000 RCX: 0000000000000000
RDX: ffff888012325580 RSI: ffffffff884e0354 RDI: 0000000000000000
RBP: 0000000000000000 R08: 0000000000000000 R09: ffffffff902078c7
R10: ffffffff884e03ae R11: 0000000000000030 R12: 0000000000000000
R13: dead000000000100 R14: ffffffff8d99a940 R15: ffffffff8d99a940
FS: 0000000000000000(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffff0000368e CR3: 00000000145b9000 CR4: 00000000001506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
crda_timeout_work+0x2c/0x50 net/wireless/reg.c:532
process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
kthread+0x3b1/0x4a0 kernel/kthread.c:313
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
Modules linked in:

CR2: ffffffff0000368e
---[ end trace 90075a34672ec380 ]---
RIP: 0010:restore_regulatory_settings+0x73c/0x1780 net/wireless/reg.c:3474
Code: 26 f9 48 8b 04 24 48 8d b8 48 06 00 00 48 89 f8 48 c1 e8 03 0f b6 04 18 84 c0 74 08 3c 03 0f 8e 6f 0d 00 00 48 8b 04 24 31 ff <8b> a8 48 06 00 00 41 89 ec 41 81 e4 80 00 00 00 44 89 e6 e8 8c fd
RSP: 0018:ffffc90000cd7c30 EFLAGS: 00010246
RAX: ffffffff00003046 RBX: dffffc0000000000 RCX: 0000000000000000
RDX: ffff888012325580 RSI: ffffffff884e0354 RDI: 0000000000000000
RBP: 0000000000000000 R08: 0000000000000000 R09: ffffffff902078c7
R10: ffffffff884e03ae R11: 0000000000000030 R12: 0000000000000000
R13: dead000000000100 R14: ffffffff8d99a940 R15: ffffffff8d99a940
FS: 0000000000000000(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffff0000368e CR3: 00000000145b9000 CR4: 00000000001506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
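
Two details in the register dump above are worth decoding. R13 holds dead000000000100, which is exactly LIST_POISON1 on x86-64 (0xdead000000000000 + 0x100), the value list_del() writes into a removed list_head's next pointer. And the faulting instruction marked in the Code bytes, mov 0x648(%rax),%ebp, lines up with the fault address: RAX (ffffffff00003046) plus 0x648 is ffffffff0000368e, which is CR2. Together those suggest a field load through a stale list entry rather than random corruption. A minimal userspace sketch that just checks this arithmetic against the constants from include/linux/poison.h; nothing below is cfg80211 code:

    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors include/linux/poison.h on x86-64, where
     * CONFIG_ILLEGAL_POINTER_VALUE defaults to 0xdead000000000000. */
    #define POISON_POINTER_DELTA 0xdead000000000000ULL
    #define LIST_POISON1 (POISON_POINTER_DELTA + 0x100) /* list_head.next after list_del() */
    #define LIST_POISON2 (POISON_POINTER_DELTA + 0x122) /* list_head.prev after list_del() */

    int main(void)
    {
        uint64_t r13 = 0xdead000000000100ULL; /* from the register dump */
        uint64_t rax = 0xffffffff00003046ULL; /* base of the faulting load */
        uint64_t off = 0x648;                 /* displacement in "mov 0x648(%rax),%ebp" */

        printf("R13 == LIST_POISON1? %s\n", r13 == LIST_POISON1 ? "yes" : "no");
        printf("RAX + off = %#llx (CR2 was ffffffff0000368e)\n",
               (unsigned long long)(rax + off));
        return 0;
    }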


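The trigger path in the call trace is the CRDA timeout: when the kernel asks the userspace CRDA helper for a regulatory domain and gets no reply, the events_power_efficient work item fires and falls back to restoring the built-in regulatory settings, which walks the global list of registered wireless devices. A rough paraphrase of the shape of that code around v5.12 in net/wireless/reg.c, as a sketch for orientation rather than a verbatim quote:

    static void crda_timeout_work(struct work_struct *work)
    {
        rtnl_lock();
        reg_crda_timeouts++;
        restore_regulatory_settings(true, false);
        rtnl_unlock();
    }

    static void restore_regulatory_settings(bool reset_user, bool cached)
    {
        struct cfg80211_registered_device *rdev;

        /* ... reset user/driver regulatory hints ... */

        list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
            /* per-wiphy restore; touching a poisoned or freed
             * entry here would fault exactly like the oops above */
        }

        /* ... requeue any pending regulatory requests ... */
    }

If the LIST_POISON1 reading is right, the open question is how an rdev could leave cfg80211_rdev_list while this walk could still reach it; with no reproducer, that remains speculation.
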
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jul 2, 2021, 7:02:21 AM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, there is no reproducer, and there has been no activity.