AMP shared memory configuration


Ed Wingate

Dec 7, 2017, 4:53:44 PM
to open-amp
I have a Zynq-7000 system running Linux on CPU0 and an RTOS on CPU1 and they are communicating via OpenAMP/rpmsg.  If I want to transfer a very large block of data (say upwards of 4MB), can I designate a block of shared memory to be writable/readable by both CPUs so that I only have to transfer the address pointer via rpmsg?  It would be like the existing trace buffer mechanism, but instead of reading via a device file on Linux, I'd like to just directly read a physical/virtual memory address from Linux.  Is this possible?  How would the shared block of memory be configured?  Using the remote resource table or something else?  Thank you for any help or suggestions.

Jiaying Liang

unread,
Dec 7, 2017, 5:18:42 PM12/7/17
to open...@googlegroups.com

Hi Ed,

 

[Wendy] It is possible, but you will need a driver in kernel space to export the DMA memory to userspace. You can take a look at the UIO driver uio_dmem_genirq.c.

You can use that driver with some customisation, or you can have your rpmsg remoteproc kernel driver export the DMA shared memory directly or register a UIO dmem device.

This is on the Linux side.

 

On the remote side, you will need to map that memory as non-cacheable memory from initialization.
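The remote-side setup can be sketched as follows. This is a platform-specific configuration fragment, not host-runnable code; it assumes the Xilinx standalone BSP on the Zynq-7000 Cortex-A9 (`xil_mmu.h`, `Xil_SetTlbAttributes()`, `NORM_NONCACHE`), and uses the address/size discussed later in this thread. The function name is hypothetical.

```c
/* Remote (CPU1) side: mark the 4 MB shared region at 0x1F000000 as
 * normal, non-cacheable memory during init. Assumes the Xilinx
 * standalone BSP (xil_mmu.h). Xil_SetTlbAttributes() operates on
 * 1 MB MMU sections, so walk the region in 1 MB steps. */
#include "xil_mmu.h"

#define SHM_BASE 0x1F000000U
#define SHM_SIZE 0x00400000U /* 4 MB */

void shm_mmu_init(void)
{
    for (UINTPTR addr = SHM_BASE; addr < SHM_BASE + SHM_SIZE;
         addr += 0x100000U)
        Xil_SetTlbAttributes(addr, NORM_NONCACHE); /* normal, non-cacheable */
}
```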

 

At the moment, the resource table doesn't cover this type of memory.

 

Best Regards,

Wendy


Ed Wingate

Dec 7, 2017, 8:08:10 PM
to open-amp
Thank you for the reply, Wendy.

I took a look at the uio dmem driver.  Is there any reason why the driver couldn't be used as is?

If my 4MB shared mem buffer is at 0x1F00_0000 - 0x1F40_0000, and I put this in my device tree:
 
    dmabuf1 {
        compatible = "dmem-uio";
        reg = < 0x1F000000 0x400000 >;
    };

Would this create a /dev/uioX device that I can then mmap and access the shared memory buffer?

It's late in the day now and I'll give it a try tomorrow, but just wanted to know why you thought customization of the uio dmem driver is needed instead of just using it out of the box.

Thanks,
Ed

Jiaying Liang

Dec 7, 2017, 8:41:36 PM
to open...@googlegroups.com

Hi Ed,

 

But if you just do that mapping in the DTS, the memory will be mapped as device memory. I think you will need to use the UIO dmem platform data to specify your DMA memory region; I am not sure you can do that from the DTS.

 

Best Regards,

Wendy

Ed Wingate

Dec 8, 2017, 5:01:53 PM
to open-amp
On Thursday, 7 December 2017 17:41:36 UTC-8, Jiaying Liang wrote:

Hi Ed,

 

But if you just do that mapping in the DTS, the memory will be mapped as device memory. I think you will need to use the UIO dmem platform data to specify your DMA memory region; I am not sure you can do that from the DTS.


Hi Wendy,

I haven't been able to try the uio dmem driver as-is yet; I've been having trouble getting the uio modules into my Yocto images. But if I were to customize uio_dmem_genirq.c or make my rpmsg remoteproc driver export DMA memory, I wouldn't know what to do differently from what is already in uio_dmem_genirq.c, so it would end up doing pretty much the same thing as that driver (mapping as device memory).

What would I have to do differently to map a DMA block that is in DDR RAM so that it is not device memory?

Also, how is "export DMA memory" different from "register a UIO dmem device"?  I know the latter would create a /dev/uioX device; how would the former be exposed/used?

Can you point me to examples of exporting DMA memory that I can use in my rpmsg remoteproc driver?

Thanks,
Ed

Jiaying Liang

Dec 11, 2017, 1:36:59 AM
to open...@googlegroups.com

Hi Ed,

 

 

 

Hi Wendy,

 

I haven't been able to try the uio dmem driver as-is yet; I've been having trouble getting the uio modules into my Yocto images. But if I were to customize uio_dmem_genirq.c or make my rpmsg remoteproc driver export DMA memory, I wouldn't know what to do differently from what is already in uio_dmem_genirq.c, so it would end up doing pretty much the same thing as that driver (mapping as device memory).

 

What would I have to do differently to map a DMA block that is in DDR RAM so that it is not device memory?

[Wendy] The DMA block needs to be mapped as normal memory instead. In uio_dmem, when you open the UIO device it will allocate DMA coherent memory; you can check the mapping files in the UIO sysfs directory to get the DMA address.

 

Also, how is "export DMA memory" different from "register a UIO dmem device"?  I know the latter would create a /dev/uioX device; how would the former be exposed/used?

[Wendy] Just registering a UIO dmem device will not work; you will need to set the platform parameter "num_dynamic_regions".

 

Can you point me to examples of exporting DMA memory that I can use in my rpmsg remoteproc driver?

[Wendy] You can check the mmap function in uio.c to see how mmap() is implemented. uio_dmem_genirq.c shows how to allocate coherent DMA memory from the driver. You can also consider dma-buf: https://www.kernel.org/doc/html/v4.12/media/uapi/v4l/dmabuf.html

I haven't used dma-buf myself, but you should be able to find kernel documentation about it.

 

Best Regards,

Wendy

 

Thanks,

Ed

 

Ed Wingate

Dec 14, 2017, 3:10:59 PM
to open-amp
I still don't have the uio_dmem_genirq driver working. Maybe because of this:
https://forums.xilinx.com/t5/Embedded-Linux/Bug-in-uio-dmem-genirq-UIO-driver-support-for-device-tree/td-p/569120

So I'll try the patch next. But on a whim I tried the uio_pdrv_genirq driver, and it is actually working. It creates a /dev/uio0 device that I can mmap to access the DDR memory block of interest (specified via the DTS). I've marked the memory block as shareable and non-cacheable in the MMU, and it really does seem to be working well: I place data into the memory block from CPU1, and Linux accesses it through /dev/uio0. What issues might I face by using the uio_pdrv_genirq driver to access DDR RAM like this?


On Sunday, December 10, 2017 at 10:36:59 PM UTC-8, Jiaying Liang wrote:

Hi Ed,

 

Also, how is "export DMA memory" different from "register a UIO dmem device"?  I know the latter would create a /dev/uioX device; how would the former be exposed/used?

[Wendy] register a UIO dmem will not work, you will need to set the platform parameter “num_dynamic_regions”.


The patch I linked to above adds support for setting num_dynamic_regions, so maybe this patch will give me what I need?

Thank you for your help.

Ed



jim....@gmail.com

Oct 23, 2018, 4:15:46 PM
to open-amp
Hi Ed,

I believe I am trying to do essentially the same thing you were doing in December. I want to be able to reserve a block of memory in the device tree, and then use mmap and related calls to map that physical memory into the address space of my APU application. I'll be using that memory region to share data between the APU and two RPUs both running bare metal.

Can you share how you used the uio_pdrv_genirq to reserve the memory, and then the specific calls you made to shm_open to access that memory?

Thanks,
Jim

jim....@gmail.com

Oct 23, 2018, 7:40:53 PM
to open-amp
Hello again Ed,

I found a solution -- see my other recent messages in this group.

Jim