kvm: recursive lock in kvm_clear_async_pf_completion_queue


Dmitry Vyukov
Nov 12, 2016, 3:48:28 PM
to Paolo Bonzini, rkr...@redhat.com, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x...@kernel.org, KVM list, LKML, Steve Rutherford, syzkaller
Hello,

I've got the following report while running the syzkaller fuzzer on commit 015ed9433be2b476ec7e2e6a9a411a56e3b5b035 (Nov 11).


[ INFO: possible recursive locking detected ]
4.9.0-rc4+ #49 Not tainted
---------------------------------------------
kworker/2:1/5658 is trying to acquire lock:
((&work->work)), at:
[< inline >] list_empty include/linux/compiler.h:243
[<ffffffff8128dd60>] flush_work+0x0/0x660 kernel/workqueue.c:1511

but task is already holding lock:
((&work->work)), at:
[<ffffffff812916ab>] process_one_work+0x94b/0x1900 kernel/workqueue.c:2093

other info that might help us debug this:
Possible unsafe locking scenario:

       CPU0
       ----
  lock((&work->work));
  lock((&work->work));

*** DEADLOCK ***

May be due to missing lock nesting notation

2 locks held by kworker/2:1/5658:
#0: (
#1: (

stack backtrace:
CPU: 2 PID: 5658 Comm: kworker/2:1 Not tainted 4.9.0-rc4+ #49
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Workqueue: events async_pf_execute
ffff8800676ff630 ffffffff81c2e46b ffffffff8485b930 ffff88006b1fc480
0000000000000000 ffffffff8485b930 ffff8800676ff7e0 ffffffff81339b27
ffff8800676ff7e8 0000000000000046 ffff88006b1fcce8 ffff88006b1fccf0
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff81c2e46b>] dump_stack+0xb3/0x118 lib/dump_stack.c:51
[< inline >] print_deadlock_bug kernel/locking/lockdep.c:1728
[< inline >] check_deadlock kernel/locking/lockdep.c:1772
[< inline >] validate_chain kernel/locking/lockdep.c:2250
[<ffffffff81339b27>] __lock_acquire+0x1157/0x3630 kernel/locking/lockdep.c:3335
[<ffffffff8133cb19>] lock_acquire+0x169/0x330 kernel/locking/lockdep.c:3746
[<ffffffff8128ddf3>] flush_work+0x93/0x660 kernel/workqueue.c:2846
[<ffffffff812954ea>] __cancel_work_timer+0x17a/0x410 kernel/workqueue.c:2916
[<ffffffff81295797>] cancel_work_sync+0x17/0x20 kernel/workqueue.c:2951
[<ffffffff81073037>] kvm_clear_async_pf_completion_queue+0xd7/0x400
arch/x86/kvm/../../../virt/kvm/async_pf.c:126
[< inline >] kvm_free_vcpus arch/x86/kvm/x86.c:7841
[<ffffffff810b728d>] kvm_arch_destroy_vm+0x23d/0x620 arch/x86/kvm/x86.c:7946
[< inline >] kvm_destroy_vm
arch/x86/kvm/../../../virt/kvm/kvm_main.c:731
[<ffffffff8105914e>] kvm_put_kvm+0x40e/0x790
arch/x86/kvm/../../../virt/kvm/kvm_main.c:752
[<ffffffff81072b3d>] async_pf_execute+0x23d/0x4f0
arch/x86/kvm/../../../virt/kvm/async_pf.c:111
[<ffffffff8129175c>] process_one_work+0x9fc/0x1900 kernel/workqueue.c:2096
[<ffffffff8129274f>] worker_thread+0xef/0x1480 kernel/workqueue.c:2230
[<ffffffff812a5a94>] kthread+0x244/0x2d0 kernel/kthread.c:209
[<ffffffff831f102a>] ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:433
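
For reference, what the trace shows is a work item flushing itself: async_pf_execute drops what is apparently the last reference via kvm_put_kvm, kvm_destroy_vm/kvm_arch_destroy_vm then reach kvm_clear_async_pf_completion_queue, and that calls cancel_work_sync() on the very work item that is currently executing, so flush_work() waits on itself and lockdep reports (&work->work) acquired twice by the same task. A minimal, stand-alone sketch of that pattern (hypothetical names, not the actual KVM code) would be:

/*
 * Minimal sketch of the self-flush pattern lockdep flags above: a work
 * handler that ends up in cancel_work_sync()/flush_work() on the work
 * item it is currently running. demo_work/demo_handler are hypothetical
 * stand-ins for the async_pf_execute -> kvm_put_kvm -> ... ->
 * kvm_clear_async_pf_completion_queue chain in the trace.
 */
#include <linux/module.h>
#include <linux/workqueue.h>

static struct work_struct demo_work;

static void demo_handler(struct work_struct *work)
{
	/*
	 * Cancelling the currently-executing work item from inside its
	 * own handler makes flush_work() wait for itself; lockdep sees
	 * (&work->work) taken recursively by the same kworker.
	 */
	cancel_work_sync(work);
}

static int __init demo_init(void)
{
	INIT_WORK(&demo_work, demo_handler);
	schedule_work(&demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");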