
[Bug 198661] New: KASAN: add checks to DMA transfers


bugzill...@bugzilla.kernel.org

Feb 4, 2018, 6:07:45 AM
to kasa...@googlegroups.com
https://bugzilla.kernel.org/show_bug.cgi?id=198661

Bug ID: 198661
Summary: KASAN: add checks to DMA transfers
Product: Memory Management
Version: 2.5
Kernel Version: ALL
Hardware: All
OS: Linux
Tree: Mainline
Status: NEW
Severity: enhancement
Priority: P1
Component: Sanitizers
Assignee: mm_san...@kernel-bugs.kernel.org
Reporter: dvy...@google.com
CC: kasa...@googlegroups.com
Regression: No

We have a case where [presumably] DMA transfers in ATA corrupt memory:
https://groups.google.com/forum/#!topic/syzkaller/9RNr9Gu0MyY
The corruption goes unnoticed and then causes dozens of assorted crashes everywhere.
We should add KASAN checks to the places where DMA commands are issued and
verify that they are issued for valid memory ranges.
We also need to check whether similar checks are needed for i2c/spi/virtio/etc.
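
A minimal sketch of the kind of check meant here, assuming the buffer's kernel
virtual address and length are still available at the point where the DMA
command is issued; kasan_check_read()/kasan_check_write() are the generic
helpers from <linux/kasan-checks.h>, and the function name is made up:

#include <linux/kasan-checks.h>
#include <linux/dma-direction.h>

/* Hypothetical check at a DMA issue site: report a KASAN error if any byte
 * of [buf, buf + len) is poisoned, i.e. the range is not valid memory for
 * the device to access. */
static void check_dma_buffer(const void *buf, size_t len,
                             enum dma_data_direction dir)
{
        if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
                kasan_check_read(buf, len);   /* the device will read this memory */
        if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
                kasan_check_write(buf, len);  /* the device will write this memory */
}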

--
You are receiving this mail because:
You are on the CC list for the bug.

bugzill...@bugzilla.kernel.org

Sep 19, 2018, 5:03:14 AM
to kasa...@googlegroups.com
https://bugzilla.kernel.org/show_bug.cgi?id=198661

Dmitry Vyukov (dvy...@google.com) changed:

What     |Removed                               |Added
----------------------------------------------------------------------------
Assignee |mm_sanitizers@kernel-bugs.kernel.org  |dvy...@google.com

bugzill...@kernel.org

Sep 21, 2024, 5:32:04 PM
to kasa...@googlegroups.com
https://bugzilla.kernel.org/show_bug.cgi?id=198661

Andrey Konovalov (andre...@gmail.com) changed:

What |Removed |Added
----------------------------------------------------------------------------
CC   |        |andre...@gmail.com

--- Comment #1 from Andrey Konovalov (andre...@gmail.com) ---
There was also a related request from kernel people during LPC: allow
temporarily marking certain ranges of memory to be inaccessible by the kernel,
as only the device is supposed to access those ranges via DMA.

This can technically already be done via kasan_un/poison_pages or other related
functions, but we might want to provide some DMA-specific wrappers. We also
need to document this use case of the interface.
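
A minimal sketch of what such wrappers could look like, assuming they live
inside mm/kasan/ and can use the internal kasan_poison()/kasan_unpoison()
helpers; the kasan_dma_* names and the KASAN_DMA_POISON shadow value are made
up for illustration:

/* Hypothetical DMA-specific wrappers (not existing kernel API). The address
 * passed to kasan_poison() must be KASAN_GRANULE_SIZE aligned. */
#define KASAN_DMA_POISON        0xfa    /* made-up shadow value for "device-owned" */

/* Hand [addr, addr + size) to the device: CPU accesses become KASAN errors. */
static void kasan_dma_poison(const void *addr, size_t size)
{
        kasan_poison(addr, round_up(size, KASAN_GRANULE_SIZE),
                     KASAN_DMA_POISON, false);
}

/* Hand the range back to the CPU: accesses are allowed again. */
static void kasan_dma_unpoison(const void *addr, size_t size)
{
        kasan_unpoison(addr, size, false);
}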


bugzill...@kernel.org

Sep 22, 2024, 2:03:52 AM
to kasa...@googlegroups.com
https://bugzilla.kernel.org/show_bug.cgi?id=198661

Dmitry Vyukov (dvy...@google.com) changed:

What |Removed |Added
----------------------------------------------------------------------------
CC   |        |ar...@arndb.de, dvy...@google.com

--- Comment #2 from Dmitry Vyukov (dvy...@google.com) ---
+Arnd, do you know where exactly DMA memory needs to be poisoned and unpoisoned
for KASAN? Or do you know who may know?


bugzill...@kernel.org

Sep 22, 2024, 5:28:19 AM
to kasa...@googlegroups.com
https://bugzilla.kernel.org/show_bug.cgi?id=198661

--- Comment #3 from Arnd Bergmann (ar...@arndb.de) ---
The functions that need an annotation are the eight "streaming mapping" ones in
kernel/dma/mapping.c:

dma_map_page_attrs()
dma_unmap_page_attrs()
__dma_map_sg_attrs()
dma_unmap_sg_attrs()
dma_sync_single_for_cpu()
dma_sync_single_for_device()
dma_sync_sg_for_cpu()
dma_sync_sg_for_device()

In all cases, the "map" and "_for_device" functions transfer ownership to the
device and would poison the memory, while the "unmap" and "_for_cpu" functions
transfer buffer ownership back and need to unpoison the buffers. Any access
from the CPU to the data between the calls is a bug, and so is leaving them
unpaired.

It appears that we already have kmsan hooks in there from 7ade4f10779c ("dma:
kmsan: unpoison DMA mappings"), but I suspect these are wrong because they mix
up the "direction" bits with the ownership and only unpoison but not poison the
buffers.
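
A rough sketch of where such hooks could sit, roughly mirroring the existing
kmsan_handle_dma() call sites but keyed on ownership rather than direction;
kasan_dma_poison()/kasan_dma_unpoison() are the hypothetical wrappers from
comment #1, and for the sync functions recovering the page/kernel address is
the complication discussed below:

/* Sketch of the two hook helpers that the eight functions in
 * kernel/dma/mapping.c would call (abbreviated; error paths omitted). */
static void kasan_handle_dma_to_device(struct page *page, size_t offset,
                                       size_t size)
{
        /* Called where ownership moves to the device (the "map" and
         * "_for_device" paths): the CPU view of the buffer gets poisoned. */
        kasan_dma_poison(page_address(page) + offset, size);
}

static void kasan_handle_dma_to_cpu(struct page *page, size_t offset,
                                    size_t size)
{
        /* Called where ownership returns to the CPU (the "unmap" and
         * "_for_cpu" paths): accesses become valid again. */
        kasan_dma_unpoison(page_address(page) + offset, size);
}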

The size of the poison area should *probably* extend to full cache lines for
short buffers, rounding the address down to an ARCH_DMA_MINALIGN boundary and
rounding the size up to the next ARCH_DMA_MINALIGN boundary. When unpoisoning,
the area should only cover the actual buffer that the DMA wrote to in the
DMA_FROM_DEVICE and DMA_BIDIRECTIONAL cases; any unaligned data around that
buffer is technically undefined after the DMA has completed, so it makes sense
to treat it as still poisoned. For DMA_TO_DEVICE transfers, the partial cache
lines around the buffer remain valid after the transfer, but writing to them
while DMA is ongoing corrupts the data inside the buffer as seen by the device.
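
A sketch of that rounding, using the hypothetical kasan_dma_* wrappers from
comment #1; ALIGN()/ALIGN_DOWN() and ARCH_DMA_MINALIGN are existing kernel
macros:

/* Poison whole ARCH_DMA_MINALIGN blocks around the buffer: writing to the
 * partial cache lines while DMA is in flight can corrupt the buffer as seen
 * by the device, so treat that as a bug too. */
static void kasan_dma_poison_aligned(void *buf, size_t size)
{
        unsigned long start = ALIGN_DOWN((unsigned long)buf, ARCH_DMA_MINALIGN);
        unsigned long end = ALIGN((unsigned long)buf + size, ARCH_DMA_MINALIGN);

        kasan_dma_poison((void *)start, end - start);
}

static void kasan_dma_unpoison_aligned(void *buf, size_t size,
                                       enum dma_data_direction dir)
{
        unsigned long start = ALIGN_DOWN((unsigned long)buf, ARCH_DMA_MINALIGN);
        unsigned long end = ALIGN((unsigned long)buf + size, ARCH_DMA_MINALIGN);

        if (dir == DMA_TO_DEVICE) {
                /* The device only read the buffer; the partial cache lines
                 * around it are still valid, so unpoison the whole block. */
                kasan_dma_unpoison((void *)start, end - start);
        } else {
                /* DMA_FROM_DEVICE / DMA_BIDIRECTIONAL: only the buffer itself
                 * holds defined data afterwards; leave the unaligned bytes
                 * around it poisoned. */
                kasan_dma_unpoison(buf, size);
        }
}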


bugzill...@kernel.org

Sep 23, 2024, 5:01:09 AM
to kasa...@googlegroups.com
https://bugzilla.kernel.org/show_bug.cgi?id=198661

--- Comment #4 from Dmitry Vyukov (dvy...@google.com) ---
Thanks Arnd, this is useful.

I see that dma_sync_single_for_device() accepts a dma_addr_t, and the comment says:

/*
 * A dma_addr_t can hold any valid DMA address, i.e., any address returned
 * by the DMA API.
 *
 * If the DMA API only uses 32-bit addresses, dma_addr_t need only be 32
 * bits wide. Bus addresses, e.g., PCI BARs, may be wider than 32 bits,
 * but drivers do memory-mapped I/O to ioremapped kernel virtual addresses,
 * so they don't care about the size of the actual bus addresses.
 */
typedef u64 dma_addr_t;

So these are not the actual physical/virtual addresses that the kernel itself
will use for that memory, right? This looks problematic because we need to
poison/unpoison the kernel's physical/virtual addresses, right?

I see that the "sg" versions accept a scatterlist, and a scatterlist has the
kernel address info, IIUC (it looks like page_link is a struct page * with
mangled low bits, so at least we can infer physical addresses for these):

struct scatterlist {
        unsigned long   page_link;
        unsigned int    offset;
        unsigned int    length;
        /* ... */
};
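
For reference, a sketch of how the CPU-side addresses can be recovered from one
scatterlist entry; sg_page(), sg_phys() and sg_virt() are existing helpers from
<linux/scatterlist.h>, sg_virt() assumes the page has a kernel mapping (i.e.
not highmem), and kasan_dma_poison() is the hypothetical wrapper from above:

#include <linux/scatterlist.h>

/* Sketch: recover the CPU-side view of one scatterlist entry so the
 * hypothetical kasan_dma_* wrappers can be applied to it. */
static void kasan_dma_poison_sg_entry(struct scatterlist *sg)
{
        struct page *page = sg_page(sg);   /* page_link with the low bits stripped */
        phys_addr_t phys = sg_phys(sg);    /* page_to_phys(page) + sg->offset */
        void *virt = sg_virt(sg);          /* page_address(page) + sg->offset */

        (void)page;
        (void)phys;
        kasan_dma_poison(virt, sg->length);
}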


bugzill...@kernel.org

Sep 23, 2024, 5:29:29 PM
to kasa...@googlegroups.com
https://bugzilla.kernel.org/show_bug.cgi?id=198661

--- Comment #5 from Arnd Bergmann (ar...@arndb.de) ---
Right, this is annoying, so it can't be done at the top level of the interface
but has to be done separately based on the implementation (direct or iommu). On
most architectures this means dma_direct_sync_single_for_{device,cpu}() and
iommu_dma_sync_single_for_{device,cpu}(), which convert the DMA address into a
physical address in different ways.

On a couple of architectures (alpha, arm, mips, parisc, powerpc, sparc and
x86), there are additional custom dma_map_ops, but most of these don't do KASAN
or (for s390 and um) don't support the dma_mapping interface.

To try things out, it's probably sufficient to ignore the custom operations.
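
A sketch of how the hook could look on the dma-direct side, where the physical
address is recoverable from the dma_addr_t; dma_to_phys() is from
<linux/dma-direct.h>, phys_to_virt() assumes a lowmem mapping, and
kasan_dma_poison() is still the hypothetical wrapper from above:

#include <linux/dma-direct.h>

/* Sketch for the dma-direct path: dma_direct_sync_single_for_device() (and
 * the matching _for_cpu/unmap functions) can translate the dma_addr_t back
 * to a physical address and poison/unpoison the kernel mapping of it. */
static void kasan_dma_direct_poison(struct device *dev, dma_addr_t dma_addr,
                                    size_t size)
{
        phys_addr_t phys = dma_to_phys(dev, dma_addr);

        kasan_dma_poison(phys_to_virt(phys), size);
}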


bugzill...@kernel.org

Dec 6, 2024, 2:02:09 AM
to kasa...@googlegroups.com
https://bugzilla.kernel.org/show_bug.cgi?id=198661

--- Comment #6 from Dmitry Vyukov (dvy...@google.com) ---
FTR, an implementation of this idea in the barebox bootloader:
https://lore.kernel.org/all/72ad8ca7-5280-457e...@pengutronix.de/

It also has some details of the DMA API.


bugzill...@kernel.org

Dec 6, 2024, 2:07:01 AM
to kasa...@googlegroups.com
https://bugzilla.kernel.org/show_bug.cgi?id=198661

--- Comment #7 from Dmitry Vyukov (dvy...@google.com) ---
The patch also implements detection of duplicated syncs, which may be a
performance issue.
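
For illustration only (this is not the barebox code), such detection can be as
simple as tracking the current owner per mapping and warning when a sync does
not actually change ownership; the names and the struct are made up:

/* Made-up debug state; a real implementation would hang this off the
 * existing DMA debugging infrastructure. */
enum dma_dbg_owner { DMA_DBG_OWNER_CPU, DMA_DBG_OWNER_DEVICE };

struct dma_dbg_mapping {
        dma_addr_t              dma_addr;
        size_t                  size;
        enum dma_dbg_owner      owner;
};

static void dma_dbg_sync_for_device(struct dma_dbg_mapping *m)
{
        if (m->owner == DMA_DBG_OWNER_DEVICE)
                pr_warn("redundant dma_sync_*_for_device for mapping %pad\n",
                        &m->dma_addr);
        m->owner = DMA_DBG_OWNER_DEVICE;
}

static void dma_dbg_sync_for_cpu(struct dma_dbg_mapping *m)
{
        if (m->owner == DMA_DBG_OWNER_CPU)
                pr_warn("redundant dma_sync_*_for_cpu for mapping %pad\n",
                        &m->dma_addr);
        m->owner = DMA_DBG_OWNER_CPU;
}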

