About the Use of sfence.vma in Kernel

Alan Kao

Nov 1, 2018, 5:00:24 AM
to linux...@lists.infradead.org, sw-...@groups.riscv.org, gree...@andestech.com
Hi all,

As mentioned in the Privileged Spec about sfence.vma instruction:

> The supervisor memory-management fence instruction SFENCE.VMA is used
> to synchronize updates to in-memory memory-management data structures
> with current execution. Instruction execution causes implicit reads
> and writes to these data structures; however, these implicit references
> are ordinarily not ordered with respect to loads and stores in the instruction
> stream.
>
> Executing an SFENCE.VMA instruction guarantees that any stores in the
> instruction stream prior to the SFENCE.VMA are ordered before all implicit
> references subsequent to the SFENCE.VMA.

It naturally follows that we should use sfence.vma once the page table is
modified. There are several examples in the kernel already, such as

alloc_set_pte (in mm/memory.c):
...
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
/* no need to invalidate: a not-present page won't be cached */
update_mmu_cache(vma, vmf->address, vmf->pte);
...
where the update_mmu_cache function eventually issues a sfence.vma.

I was interested in whether this is always the case and did some research.
RV64 uses a 3-level page table (pud, pmd, and pte), so I traced the code
flow after set_pud, set_pmd, and set_pte.

It turns out that some of the calls to them are not followed by an
sfence.vma. For instance, in the vmalloc_fault region of do_page_fault,
there is no sfence.vma, nor any call that issues one, after set_pgd, which
forwards to set_pud later.

Are these bugs, or do I just misunderstand the instruction? As the kernel
has already been stable for quite a while now, this is unlikely to be a
critical bug.

Any clarification will be highly appreciated.

Many thanks,
Alan Kao

Alan Kao

Nov 4, 2018, 7:49:47 PM
to pal...@sifive.com, linux...@lists.infradead.org, sw-...@groups.riscv.org, gree...@andestech.com
Hi Palmer,

I believe the code in arch/riscv/mm/fault.c is mostly from you.
Do you have any comments on this?

Palmer Dabbelt

Nov 5, 2018, 9:33:26 PM
to ala...@andestech.com, linux...@lists.infradead.org, sw-...@groups.riscv.org, gree...@andestech.com
Sorry, I missed your original email.
This specific one looks like a bug: we're trying to fill out the page table for
the vmalloc region, but we'll just continue trapping without an "sfence.vma".
The path between poking the page tables and the sret is pretty short and
doesn't appear to ever have an "sfence.vma", so I'm not sure how this could
work.

>>
>> Are these bugs, or do I just misunderstand the instruction? As the kernel
>> has already been stable for quite a while now, this is unlikely to be a
>> critical bug.
>>
>> Any clarification will be highly appreciated.

Well, certainly from this it looks pretty broken -- and in a manner I'd expect
to trigger frequently. There are no fences in any of the other similar-looking
implementations.

Maybe I'm missing something here?

FWIW, if I apply the following diff

diff --git a/arch/riscv/kernel/reset.c b/arch/riscv/kernel/reset.c
index 2a53d26ffdd6..fbd132d388fb 100644
--- a/arch/riscv/kernel/reset.c
+++ b/arch/riscv/kernel/reset.c
@@ -15,6 +15,8 @@
#include <linux/export.h>
#include <asm/sbi.h>

+extern long vmalloc_faults;
+
void (*pm_power_off)(void) = machine_power_off;
EXPORT_SYMBOL(pm_power_off);

@@ -31,6 +33,7 @@ void machine_halt(void)

void machine_power_off(void)
{
+ printk("vmalloc faults: %ld\n", vmalloc_faults);
sbi_shutdown();
while (1);
}
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 88401d5125bc..61ef1128632c 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -30,6 +30,8 @@
#include <asm/pgalloc.h>
#include <asm/ptrace.h>

+long vmalloc_faults = 0;
+
/*
* This routine handles page faults. It determines the address and the
* problem, and then passes it off to one of the appropriate routines.
@@ -281,6 +283,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
pte_k = pte_offset_kernel(pmd_k, addr);
if (!pte_present(*pte_k))
goto no_context;
+
+ vmalloc_faults++;
return;
}
}

I get only a single vmalloc fault when doing a boot+shutdown of Fedora in QEMU,
so maybe this just slipped through the cracks?

Alan Kao

Nov 6, 2018, 2:49:15 AM
to Palmer Dabbelt, linux...@lists.infradead.org, sw-...@groups.riscv.org, gree...@andestech.com
Thanks for the response!
I had some discussions with our hardware engineers and architects earlier.
Their hypothesis is that the translation hardware on your board can somehow
snoop the dcache, so that an update like this seamlessly works for any
subsequent VA-to-PA translation. Would you like to help verify this?

The riscv_virt board on QEMU doesn't matter here because translation
caching is not modeled.

> >>
> >> Are these bugs, or do I just misunderstand the instruction? As the kernel
> >> has already been stable for quite a while now, this is unlikely to be a
> >> critical bug.
> >>
> >> Any clarification will be highly appreciated.
>
> Well, certainly from this it looks pretty broken -- and in a manner I'd
> expect to trigger frequently. There are no fences in any of the other
> similar-looking implementations.
>
> Maybe I'm missing something here?

Actually, this set_pud is just one instance of the problem.

As mentioned in my previous mail, I traced the current codebase to see which
functions call set_pte, set_pmd, or set_pud. Under my compiler optimization
settings and environment, the following functions contain an inlined set_p**
with no obvious sfence.vma following, just for your information:

PUD cases:
__pmd_alloc
pud_clear_bad

PMD cases:
pmd_clear_bad
__pte_alloc_kernel

Both PUD and PMD:
free_pgd_range
do_page_fault

PTE cases:
unmap_page_range
remap_pfn_range
copy_page_range
vm_insert_page
change_protection_range
page_mkclean_one
try_to_unmap_one
vmap_page_range_noflush
madvise_free_pte_range
remove_migration_pte
ioremap_page_range

Some also fall into a grey area. For example, finish_mkwrite_fault
has an instruction sequence like

> sd s3,0(s1) // *ptep = pte
> ld a5,24(s2)
> sfence.vma a5

in which the sfence.vma cannot immediately follow the PTE update, because
we first have to load the PTE value into $a5, which holds the leaf PTE.
Thanks for the experiment, but since there are many other cases listed above,
maybe the fastest way to figure this mystery out is to check the details
with your hardware people? IMHO the hypothesis sounds plausible. Once that
is determined, we can easily figure out what to do about these page-table
updates.

Alan

Andreas Schwab

Nov 6, 2018, 3:03:22 AM
to Palmer Dabbelt, ala...@andestech.com, linux...@lists.infradead.org, sw-...@groups.riscv.org, gree...@andestech.com
Perhaps that's the reason I sometimes get errors like this:

[303420.500000] swap_info_get: Bad swap file entry 2000000000df09b1
[303420.500000] BUG: Bad page map in process struct-ret-3.ex pte:6f84d8c0 pmd:8f579001
[303420.510000] addr:000000008a1610cd vm_flags:00100173 anon_vma:0000000036036fc9 mapping: (null) index:3ffffe7
[303420.520000] file: (null) fault: (null) mmap: (null) readpage: (null)
[303420.530000] CPU: 1 PID: 2054 Comm: struct-ret-3.ex Not tainted 4.19.0-00040-g2d5ee99e76 #42
[303420.530000] Call Trace:
[303420.530000] [<ffffffe000c847d4>] walk_stackframe+0x0/0xa4
[303420.530000] [<ffffffe000c849d4>] show_stack+0x2a/0x34
[303420.530000] [<ffffffe0011a6800>] dump_stack+0x62/0x7c
[303420.530000] [<ffffffe000d55942>] print_bad_pte+0x146/0x18e
[303420.530000] [<ffffffe000d56df6>] unmap_page_range+0x33a/0x5ea
[303420.530000] [<ffffffe000d570d4>] unmap_single_vma+0x2e/0x40
[303420.530000] [<ffffffe000d57278>] unmap_vmas+0x42/0x7a
[303420.530000] [<ffffffe000d5cb92>] exit_mmap+0x7e/0x106
[303420.530000] [<ffffffe000c872aa>] mmput.part.2+0x26/0xa0
[303420.530000] [<ffffffe000c87344>] mmput+0x20/0x28
[303420.530000] [<ffffffe000c8ba24>] do_exit+0x238/0x7c0
[303420.530000] [<ffffffe000c8c006>] do_group_exit+0x2a/0x82
[303420.530000] [<ffffffe000c8c076>] __wake_up_parent+0x0/0x22
[303420.530000] [<ffffffe000c83722>] ret_from_syscall+0x0/0xe

Andreas.

--
Andreas Schwab, SUSE Labs, sch...@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE 1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."

Palmer Dabbelt

Nov 7, 2018, 10:51:35 AM
to sch...@suse.de, ala...@andestech.com, linux...@lists.infradead.org, sw-...@groups.riscv.org, gree...@andestech.com
On Tue, 06 Nov 2018 00:03:19 PST (-0800), sch...@suse.de wrote:
> Perhaps that's the reason I sometimes get errors like this:
>
> [303420.500000] swap_info_get: Bad swap file entry 2000000000df09b1
> [303420.500000] BUG: Bad page map in process struct-ret-3.ex pte:6f84d8c0 pmd:8f579001
> [303420.510000] addr:000000008a1610cd vm_flags:00100173 anon_vma:0000000036036fc9 mapping: (null) index:3ffffe7
> [303420.520000] file: (null) fault: (null) mmap: (null) readpage: (null)
> [303420.530000] CPU: 1 PID: 2054 Comm: struct-ret-3.ex Not tainted 4.19.0-00040-g2d5ee99e76 #42
> [303420.530000] Call Trace:
> [303420.530000] [<ffffffe000c847d4>] walk_stackframe+0x0/0xa4
> [303420.530000] [<ffffffe000c849d4>] show_stack+0x2a/0x34
> [303420.530000] [<ffffffe0011a6800>] dump_stack+0x62/0x7c
> [303420.530000] [<ffffffe000d55942>] print_bad_pte+0x146/0x18e
> [303420.530000] [<ffffffe000d56df6>] unmap_page_range+0x33a/0x5ea
> [303420.530000] [<ffffffe000d570d4>] unmap_single_vma+0x2e/0x40
> [303420.530000] [<ffffffe000d57278>] unmap_vmas+0x42/0x7a
> [303420.530000] [<ffffffe000d5cb92>] exit_mmap+0x7e/0x106
> [303420.530000] [<ffffffe000c872aa>] mmput.part.2+0x26/0xa0
> [303420.530000] [<ffffffe000c87344>] mmput+0x20/0x28
> [303420.530000] [<ffffffe000c8ba24>] do_exit+0x238/0x7c0
> [303420.530000] [<ffffffe000c8c006>] do_group_exit+0x2a/0x82
> [303420.530000] [<ffffffe000c8c076>] __wake_up_parent+0x0/0x22
> [303420.530000] [<ffffffe000c83722>] ret_from_syscall+0x0/0xe

From my understanding of it, this should manifest as an infinite loop:

* The implementation will raise an exception due to an unmapped page.
* The trap handler will then go fix up that exception and attempt to re-execute
the offending instruction.
* The implementation will raise the exception again because its translation caches
haven't been updated.

This requires an implementation that caches invalid mappings. IIRC we don't do
that in QEMU or in Rocket.