KASAN: use-after-free Read in snd_seq_queue_alloc


syzbot

Oct 31, 2017, 7:57:50 AM10/31/17
to syzkaller-upst...@googlegroups.com
Hello,

syzkaller hit the following crash on
91dfed74eabcdae9378131546c446442c29bf769
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
compiler: gcc (GCC) 7.1.1 20170620
.config is attached
Raw console output is attached.

syzkaller reproducer is attached. See https://goo.gl/kgGztJ
for information about syzkaller reproducers
CC: [pe...@perex.cz ti...@suse.com alsa-...@alsa-project.org
linux-...@vger.kernel.org]

==================================================================
BUG: KASAN: use-after-free in snd_seq_queue_alloc+0x558/0x590
sound/core/seq/seq_queue.c:202
Read of size 4 at addr ffff88003a63b540 by task syz-executor0/4566

CPU: 1 PID: 4566 Comm: syz-executor0 Not tainted 4.13.0-rc4-next-20170811 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
print_address_description+0x7f/0x260 mm/kasan/report.c:252
kasan_report_error mm/kasan/report.c:351 [inline]
kasan_report+0x24e/0x340 mm/kasan/report.c:409
__asan_report_load4_noabort+0x14/0x20 mm/kasan/report.c:429
snd_seq_queue_alloc+0x558/0x590 sound/core/seq/seq_queue.c:202
snd_seq_ioctl_create_queue+0xad/0x310 sound/core/seq/seq_clientmgr.c:1508
snd_seq_ioctl+0x204/0x400 sound/core/seq/seq_clientmgr.c:2130
vfs_ioctl fs/ioctl.c:45 [inline]
do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:685
SYSC_ioctl fs/ioctl.c:700 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x446739
RSP: 002b:00007f61cecfac08 EFLAGS: 00000282 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 0000000000446739
RDX: 0000000020045000 RSI: 00000000c08c5332 RDI: 0000000000000004
RBP: 0000000000000086 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000282 R12: 00000000ffffffff
R13: 0000000000002690 R14: 00000000006e4750 R15: 00000000408c5333

Allocated by task 4566:
save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
save_stack+0x43/0xd0 mm/kasan/kasan.c:447
set_track mm/kasan/kasan.c:459 [inline]
kasan_kmalloc+0xaa/0xd0 mm/kasan/kasan.c:551
kmem_cache_alloc_trace+0x108/0x700 mm/slab.c:3627
kzalloc include/linux/slab.h:493 [inline]
queue_new sound/core/seq/seq_queue.c:113 [inline]
snd_seq_queue_alloc+0xa5/0x590 sound/core/seq/seq_queue.c:193
snd_seq_ioctl_create_queue+0xad/0x310 sound/core/seq/seq_clientmgr.c:1508
snd_seq_ioctl+0x204/0x400 sound/core/seq/seq_clientmgr.c:2130
vfs_ioctl fs/ioctl.c:45 [inline]
do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:685
SYSC_ioctl fs/ioctl.c:700 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
entry_SYSCALL_64_fastpath+0x1f/0xbe

Freed by task 4592:
save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
save_stack+0x43/0xd0 mm/kasan/kasan.c:447
set_track mm/kasan/kasan.c:459 [inline]
kasan_slab_free+0x6e/0xc0 mm/kasan/kasan.c:524
__cache_free mm/slab.c:3503 [inline]
kfree+0xd3/0x260 mm/slab.c:3820
queue_delete+0x90/0xb0 sound/core/seq/seq_queue.c:156
snd_seq_queue_delete+0x3c/0x50 sound/core/seq/seq_queue.c:215
snd_seq_ioctl_delete_queue+0x6a/0x90 sound/core/seq/seq_clientmgr.c:1534
snd_seq_ioctl+0x204/0x400 sound/core/seq/seq_clientmgr.c:2130
vfs_ioctl fs/ioctl.c:45 [inline]
do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:685
SYSC_ioctl fs/ioctl.c:700 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
entry_SYSCALL_64_fastpath+0x1f/0xbe

The buggy address belongs to the object at ffff88003a63b540
which belongs to the cache kmalloc-512 of size 512
The buggy address is located 0 bytes inside of
512-byte region [ffff88003a63b540, ffff88003a63b740)
The buggy address belongs to the page:
page:ffffea0000cc5ce8 count:1 mapcount:0 mapping:ffff88003a63b040 index:0x0
flags: 0x100000000000100(slab)
raw: 0100000000000100 ffff88003a63b040 0000000000000000 0000000100000006
raw: ffffea0000d8caf8 ffffea0000d85430 ffff88003e800600
page dumped because: kasan: bad access detected

Memory state around the buggy address:
ffff88003a63b400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88003a63b480: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
> ffff88003a63b500: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
^
ffff88003a63b580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff88003a63b600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================


---
This bug is generated by a dumb bot. It may contain errors.
See https://goo.gl/tpsmEJ for details.
Direct all questions to syzk...@googlegroups.com.
Please credit me with: Reported-by: syzbot <syzk...@googlegroups.com>

syzbot will keep track of this bug report.
Once a fix for this bug is committed, please reply to this email with:
#syz fix: exact-commit-title
To mark this as a duplicate of another syzbot report, please reply with:
#syz dup: exact-subject-of-another-report
If it's a one-off invalid bug report, please reply with:
#syz invalid
Note: if the crash happens again, it will cause creation of a new bug
report.
Note: all commands must start from beginning of the line.
To upstream this report, please reply with:
#syz upstream
config.txt
raw.log
repro.txt

Dmitry Vyukov

Oct 31, 2017, 8:52:55 AM10/31/17
to syzbot, 'Dmitry Vyukov' via syzkaller-upstream-moderation
Seems to be fixed by "ALSA: seq: 2nd attempt at fixing race creating a queue":

#syz invalid

On Tue, Oct 31, 2017 at 2:57 PM, syzbot
<bot+55476b5f9523b2dacd...@syzkaller.appspotmail.com>
wrote: