https://bugzilla.kernel.org/show_bug.cgi?id=198661
--- Comment #3 from Arnd Bergmann (ar...@arndb.de) ---
The functions that need an annotation are the eight "streaming mapping" ones in
kernel/dma/mapping.c:
dma_map_page_attrs()
dma_unmap_page_attrs()
__dma_map_sg_attrs()
dma_unmap_sg_attrs()
dma_sync_single_for_cpu()
dma_sync_single_for_device()
dma_sync_sg_for_cpu()
dma_sync_sg_for_device()
In all cases, the "map" and "_for_device" functions transfer ownership to the
device and would poison the memory, while the "unmap" and "_for_cpu" functions
transfer buffer ownership back and need to unpoison the buffers. Any CPU access
to the data between the calls is a bug, and so is leaving the calls unpaired.
It appears that we already have kmsan hooks in there from 7ade4f10779c ("dma:
kmsan: unpoison DMA mappings"), but I suspect these are wrong because they mix
up the "direction" bits with the ownership and only unpoison but not poison the
buffers.
The poison area should *probably* extend to full cache lines for short
buffers, rounding the start address down to an ARCH_DMA_MINALIGN boundary and
rounding the size up to the next ARCH_DMA_MINALIGN boundary. When unpoisoning,
the area should cover only the actual buffer that the DMA wrote to in the
DMA_FROM_DEVICE and DMA_BIDIRECTIONAL cases; any unaligned data around that
buffer is technically undefined after the DMA has completed, so it makes sense
to treat it as still poisoned. For DMA_TO_DEVICE transfers, the partial cache
lines around the buffer remain valid after the transfer, but writing to them
while the DMA is ongoing can corrupt the data inside the buffer as seen by the
device.