Question 1: does the X server (running under Linux) allocate the composition
buffers in the usual way (using malloc() or similar), or does it do anything
fancier in order to make sure the physical address of the composition buffers
is stable over time?
Question 2: Does the X server ever schedule a DMA transaction (to the graphics
card, presumably) from a composition buffer? (In that case it would make sense
to pin its physical pages.)
The Qubes OS Project
I'm assuming by "composition buffer" you mean the thing you're actually
scanning out on the display. I'll use the word "framebuffer" for that,
since that's the usual X name for it.
In the absence of an accelerated driver, X doesn't care about the
physical address of the framebuffer at runtime. Older versions of X
would mmap /dev/mem starting at the physical address of the framebuffer
(as gleaned from the PCI config registers), but after that point we just
write into it by virtual address. Newer ones do basically the same
thing, but mmap the PCI device's resource file in sysfs, where the file
offset is relative to the start of the BAR rather than a physical address.
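
For illustration, here's a minimal sketch of that newer path. The PCI
address, BAR index, and framebuffer size below are made up for the
example; the real server discovers them through the PCI layer:

/* Sketch: mapping a framebuffer BAR through sysfs.
 * The device path and sizes are hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* resource0 exposes BAR 0 of the device; offsets within this
     * file are relative to the BAR base, not physical memory. */
    const char *path = "/sys/bus/pci/devices/0000:00:02.0/resource0";
    size_t fb_size = 8 * 1024 * 1024;   /* assumed framebuffer size */

    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    uint32_t *fb = mmap(NULL, fb_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* From here on, only the virtual address matters. */
    fb[0] = 0x00ff0000;                 /* paint one pixel red */

    munmap(fb, fb_size);
    close(fd);
    return 0;
}
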
If you have an accelerated driver, then it depends on the particular
driver. Userspace-only drivers may need to know the physical address of
memory to initiate DMA. Kernel-based drivers typically do DMA from the
kernel side, where pages can be pinned and physical addresses looked up.
It's not really clear what you're asking though, or what you're trying
to accomplish.
Window pixmaps are like any other pixmap. Where they live is entirely
up to the driver. Unaccelerated drivers keep them in host memory with
malloc(). Accelerated drivers do that sometimes, and then sometimes put
them in video memory. Remember that you can have more video memory than
you can see through a PCI BAR, so you might not be able to address the
pixmap from the CPU at all.
> Briefly, the goal is to get the location of a composition buffer created by
> the X server running in virtual machine A, and map it into the address space
> of virtual machine B. Such a mapping has to be defined in terms of physical
> addresses; consequently, it is crucial to make sure that the frames backing a
> composition buffer do not change over time.
That's not going to do anything useful without some synchronization
work. Window pixmaps aren't created afresh for each frame. They're
long-lived. If you manage to get a pixmap shared between VMs A and B,
there's nothing to stop A from rendering into it while B is reading from
it.
The way compositing managers handle this is by taking an X server grab
while reading out of the window pixmap, which does prevent other client
rendering from happening. And as soon as you're doing _that_, just
XGetImage the pixels out instead of playing funny MMU games; it'll
probably be faster.
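
For illustration, a minimal sketch of that grab-and-read pattern,
assuming the Composite extension is available (the redirect and
name-pixmap calls are one way to get at the window pixmap; error
handling is elided; link with -lX11 -lXcomposite):

#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>

/* 'win' is whatever window you are scraping. */
XImage *snapshot_window(Display *dpy, Window win,
                        unsigned width, unsigned height)
{
    /* Redirect the window so it has a stable backing pixmap. */
    XCompositeRedirectWindow(dpy, win, CompositeRedirectAutomatic);
    Pixmap pix = XCompositeNameWindowPixmap(dpy, win);

    /* The grab stops other clients from rendering while we read. */
    XGrabServer(dpy);
    XImage *img = XGetImage(dpy, pix, 0, 0, width, height,
                            AllPlanes, ZPixmap);
    XUngrabServer(dpy);
    XFlush(dpy);   /* make sure the ungrab actually goes out */

    XFreePixmap(dpy, pix);
    return img;    /* caller frees with XDestroyImage() */
}
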
Strictly, it's just a convention. In practice, it points to the pixels,
except when it doesn't. See xf86EnableDisableFBAccess() for the
exception.
> > That's not going to do anything useful without some synchronization
> > work. Window pixmaps aren't created afresh for each frame. They're
> > long-lived. If you manage to get a pixmap shared between VMs A and B,
> > there's nothing to stop A from rendering into it while B is reading from
> > it.
> Currently, synchronization is done via Damage extension events. It seems to
> work: a video player running in A in full screen is correctly displayed in
> B, and all other apps work fine as well.
You appear to be using DamageReportRawRectangles, so you'll eventually
be consistent. It looks like, due to how the X server implements it,
you'll end up getting a report for the whole pixmap all the time, as we
never empty the internal damage tracking. Which isn't the most
efficient thing ever, but might not actually matter given how most apps
end up double-buffering updates.
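
For reference, this is roughly what listening in that reporting mode
looks like from the client side (a sketch with a bare event loop, error
handling elided; link with -lX11 -lXdamage):

#include <X11/Xlib.h>
#include <X11/extensions/Xdamage.h>
#include <stdio.h>

void watch_damage(Display *dpy, Window win)
{
    int ev_base, err_base;
    if (!XDamageQueryExtension(dpy, &ev_base, &err_base))
        return;

    /* RawRectangles: a report per rendering request, with no
     * accumulation, so events keep flowing without a Subtract. */
    XDamageCreate(dpy, win, XDamageReportRawRectangles);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == ev_base + XDamageNotify) {
            XDamageNotifyEvent *de = (XDamageNotifyEvent *) &ev;
            printf("damage: %dx%d+%d+%d\n",
                   de->area.width, de->area.height,
                   de->area.x, de->area.y);
        }
    }
}
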
> BTW, is it possible that, in the case of an accelerated driver,
> XShmPutImage uses DMA from the MIT-SHM region directly to VGA memory?
Yes, very possible. The intel and radeon drivers do this, and probably
others do as well.
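
For completeness, here's roughly what that SHM path looks like from the
client side (a sketch with error handling and cleanup elided; whether
the server copies or DMAs out of the segment is up to the driver; link
with -lX11 -lXext):

#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>

void shm_put(Display *dpy, Window win, GC gc,
             unsigned width, unsigned height)
{
    int scr = DefaultScreen(dpy);
    XShmSegmentInfo shminfo;

    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, &shminfo, width, height);

    /* The pixel data lives in a SysV shared segment that the
     * server attaches to directly. */
    shminfo.shmid = shmget(IPC_PRIVATE,
                           img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = True;
    XShmAttach(dpy, &shminfo);

    /* ... fill img->data with pixels ... */

    /* The server reads (or DMAs) straight out of the segment. */
    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0,
                 width, height, False);
    XSync(dpy, False);
}
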