Re: [PATCH] mm/hugetlb: fix deadlock in __hugetlb_zap_begin() by using trylock

Hillf Danton

May 13, 2026, 10:42:47 PM
to Kartik Nair, mho...@suse.com, linu...@kvack.org, linux-...@vger.kernel.org, syzbot+bd6aaf...@syzkaller.appspotmail.com, syzkall...@googlegroups.com
On Thu, 14 May 2026 02:49:27 +0530 Kartik Nair wrote:
> syzbot reported a circular locking dependency involving
> resv_map->rw_sema and mmap_lock:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&mm->mmap_lock);
>                                lock(sk_lock-AF_INET6);
>                                lock(&mm->mmap_lock);
>   lock(&resv_map->rw_sema);
>
> __hugetlb_zap_begin() calls hugetlb_vma_lock_write() which does a
> blocking down_write() on either vma_lock->rw_sema or
> resv_map->rw_sema while mmap_lock is already held for write by the
> caller chain (vm_mmap_pgoff -> mmap_region -> __mmap_region ->
> unmap_region -> unmap_vmas -> hugetlb_zap_begin).
>
> Fix this by converting __hugetlb_zap_begin() to use
> hugetlb_vma_trylock_write() instead of hugetlb_vma_lock_write().
> If the trylock fails, return false to the callers so they can skip
> the zap operation safely. Update hugetlb_zap_begin() and its callers
> in unmap_vmas() and zap_vma_range_batched() accordingly.
>
Given q->q_usage_counter in the syzbot report [1] and the correct
locking order established in ffa1e7ada456 ("block: Make request_queue
lockdep splats show up earlier"), I suspect a change to hugetlb is
needed.

[1] https://lore.kernel.org/lkml/6a02edcf.170a022...@google.com/