[v6.1] INFO: task hung in read_part_sector

From: syzbot
Date: Feb 7, 2024, 2:34:27 AM
To: syzkaller...@googlegroups.com

Hello,

syzbot found the following issue on:

HEAD commit: f1bb70486c9c Linux 6.1.77
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=13e895b7e80000
kernel config: https://syzkaller.appspot.com/x/.config?x=39447811cb133e7e
dashboard link: https://syzkaller.appspot.com/bug?extid=6311d6b5550cf4b97421
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f93cb7e9dad2/disk-f1bb7048.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/22703d1d86ee/vmlinux-f1bb7048.xz
kernel image: https://storage.googleapis.com/syzbot-assets/4129725af309/bzImage-f1bb7048.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6311d6...@syzkaller.appspotmail.com
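
For example, the tag would go at the end of the fix's changelog, in the usual kernel commit-message trailer style (the title and body below are placeholders, not a real commit):

    <subsystem>: <one-line summary of the fix>

    <explanation of the root cause and the fix>

    Reported-by: syzbot+6311d6...@syzkaller.appspotmail.com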

INFO: task syz-executor.4:7963 blocked for more than 143 seconds.
Not tainted 6.1.77-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:26296 pid:7963 ppid:3579 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
io_schedule+0x88/0x100 kernel/sched/core.c:8786
folio_wait_bit_common+0x878/0x1290 mm/filemap.c:1296
folio_put_wait_locked mm/filemap.c:1465 [inline]
do_read_cache_folio+0xb9/0x810 mm/filemap.c:3580
read_mapping_folio include/linux/pagemap.h:797 [inline]
read_part_sector+0xcb/0x350 block/partitions/core.c:722
adfspart_check_ICS+0xd5/0x990 block/partitions/acorn.c:360
check_partition block/partitions/core.c:146 [inline]
blk_add_partitions block/partitions/core.c:607 [inline]
bdev_disk_changed+0x79e/0x1460 block/partitions/core.c:693
blkdev_get_whole+0x2dd/0x360 block/bdev.c:687
blkdev_get_by_dev+0x321/0xa10 block/bdev.c:824
blkdev_open+0x12e/0x2e0 block/fops.c:500
do_dentry_open+0x7f9/0x10f0 fs/open.c:882
do_open fs/namei.c:3628 [inline]
path_openat+0x2644/0x2e60 fs/namei.c:3785
do_filp_open+0x230/0x480 fs/namei.c:3812
do_sys_openat2+0x13b/0x500 fs/open.c:1318
do_sys_open fs/open.c:1334 [inline]
__do_sys_openat fs/open.c:1350 [inline]
__se_sys_openat fs/open.c:1345 [inline]
__x64_sys_openat+0x243/0x290 fs/open.c:1345
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f63d727c9a0
RSP: 002b:00007f63d7f13c00 EFLAGS: 00000293 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f63d727c9a0
RDX: 0000000000000000 RSI: 00007f63d7f13ca0 RDI: 00000000ffffff9c
RBP: 00007f63d7f13ca0 R08: 0000000000000000 R09: 002364626e2f7665
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000000
R13: 000000000000000b R14: 00007f63d73abf80 R15: 00007ffff4091288
</TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8d12a910 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8d12b110 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by khungtaskd/28:
#0: ffffffff8d12a740 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:318 [inline]
#0: ffffffff8d12a740 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:759 [inline]
#0: ffffffff8d12a740 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6494
2 locks held by getty/3308:
#0: ffff88814b5fc098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2188
4 locks held by kworker/u4:6/3679:
1 lock held by syz-executor.4/7963:
#0: ffff8881443984c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x148/0xa10 block/bdev.c:815
1 lock held by syz-executor.4/8417:
#0: ffff8881443984c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x148/0xa10 block/bdev.c:815
1 lock held by syz-executor.4/8806:
#0: ffff8881443984c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x148/0xa10 block/bdev.c:815
1 lock held by syz-executor.4/9118:
#0: ffff8881443984c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x148/0xa10 block/bdev.c:815

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.1.77-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xf88/0xfd0 kernel/hung_task.c:377
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 4667 Comm: kworker/u4:9 Not tainted 6.1.77-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: phy22 ieee80211_iface_work
RIP: 0010:__stack_depot_save+0x15c/0x470 lib/stackdepot.c:452
Code: 0e 29 c5 41 31 ed c1 c5 18 41 29 ed eb 03 44 89 e7 48 8b 05 de dc 9c 0d 8b 1d d4 dc 9c 0d 44 21 eb 48 89 44 24 10 4c 8b 34 d8 <4c> 89 c5 41 89 ec eb 03 4d 8b 36 4d 85 f6 74 2a 45 39 6e 08 75 f2
RSP: 0018:ffffc90014647398 EFLAGS: 00000206
RAX: ffff88823b400000 RBX: 00000000000dd714 RCX: 0000000009e8751a
RDX: ffffc90014647454 RSI: 0000000000000001 RDI: 0000000000000800
RBP: 0000000006d5ed43 R08: 000000000000000b R09: 0000000000000001
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000800
R13: 000000006e9dd714 R14: ffff8880522da290 R15: ffffc90014647400
FS: 0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c01d40fda0 CR3: 000000000ce8e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
kasan_save_stack mm/kasan/common.c:46 [inline]
kasan_set_track+0x60/0x70 mm/kasan/common.c:52
kasan_save_free_info+0x27/0x40 mm/kasan/generic.c:516
____kasan_slab_free+0xd6/0x120 mm/kasan/common.c:236
kasan_slab_free include/linux/kasan.h:177 [inline]
slab_free_hook mm/slub.c:1724 [inline]
slab_free_freelist_hook mm/slub.c:1750 [inline]
slab_free mm/slub.c:3661 [inline]
__kmem_cache_free+0x25c/0x3c0 mm/slub.c:3674
ieee80211_bss_info_update+0xa54/0xf00 net/mac80211/scan.c:223
ieee80211_rx_bss_info net/mac80211/ibss.c:1120 [inline]
ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1609 [inline]
ieee80211_ibss_rx_queued_mgmt+0x1962/0x2dd0 net/mac80211/ibss.c:1638
ieee80211_iface_process_skb net/mac80211/iface.c:1632 [inline]
ieee80211_iface_work+0x7aa/0xce0 net/mac80211/iface.c:1686
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
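
For context, the blocked call chain in the first trace is reachable from an ordinary openat() of a whole-disk block device: the open takes disk->open_mutex, and if the device needs rescanning, reads partition sectors through the page cache. A minimal sketch of that entry point, assuming a loop device stands in for the device the fuzzer used (the path is illustrative only):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Opening a whole-disk block device takes disk->open_mutex
         * (blkdev_get_by_dev) and may rescan the partition table
         * (bdev_disk_changed -> check_partition), which reads partition
         * sectors via the page cache (read_part_sector ->
         * read_mapping_folio). The hung task above is stuck waiting on
         * the folio lock inside that read; the other syz-executor.4
         * tasks queue up behind it on the same open_mutex. */
        int fd = open("/dev/loop0", O_RDONLY); /* illustrative device */
        if (fd >= 0)
            close(fd);
        return 0;
    }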


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup