From: open...@googlegroups.com [mailto:open...@googlegroups.com]
On Behalf Of Duane Murphy
Sent: Friday, June 15, 2018 5:30 PM
To: open-amp <open...@googlegroups.com>
Subject: [open-amp] Re: Zynq UltraScale+ ZCU102 with both remote processors?
Answering my own question ...
In the case of the ZCU102, the Linux driver would not be used. Instead, the zynqmp driver would be used, which uses the hardware infrastructure between the A53 and the R5.
[Wendy] You can use either the RPMsg + remoteproc driver implementation or the OpenAMP library implementation (RPMsg, virtio + remoteproc in user space) for this case.
If you choose the kernel-space RPMsg + remoteproc implementation, the remoteproc Linux kernel driver will load the firmware onto the RPU before you can use RPMsg for the connection.
If I wanted to simulate this capability (two remote processors) with linux-to-linux mode, would expanding the ipi_table and rproc_table in platform_info.c be the right direction?
[Wendy] You can take a look at: https://github.com/OpenAMP/open-amp/blob/master/apps/machine/zynqmp/platform_info.c. You will need two sets of IPI information (two IPI UIO devices, two vring UIO devices, and two shared-buffer UIO devices, one set for each RPU).
Best Regards,
Wendy
On Friday, June 15, 2018 at 5:19:17 PM UTC-7, Duane Murphy wrote:
Can OpenAMP be used to connect to two remote processors at the same time?
The ZCU102 has two R5 cores and four A53 cores. We would like to run Linux on the A53 cores and connect to and run remote processes on both R5 cores.
Please correct my description as I may be off in the weeds or missed an important part.
The Linux OpenAMP implementation uses sockets to communicate with the application, and the first remote process connects to those sockets. How would the second remote process connect?
My work so far has been with linux-to-linux mode, which seems to have this limitation. If I want to simulate two remote processors as two remote applications, I would need two more pairs of sockets. For example:
{ "unixs:/tmp/openamp.event.0", -1, NULL, 0 },
{ "unixs:/tmp/openamp.event.1", -1, NULL, 0 },
{ "unix:/tmp/openamp.event.0", -1, NULL, 0 },
{ "unix:/tmp/openamp.event.1", -1, NULL, 0 },
would connect the master and one remote application, while
{ "unixs:/tmp/openamp.event.2", -1, NULL, 0 },
{ "unixs:/tmp/openamp.event.3", -1, NULL, 0 },
{ "unix:/tmp/openamp.event.2", -1, NULL, 0 },
{ "unix:/tmp/openamp.event.3", -1, NULL, 0 },
would be used to connect the master and the second remote application.
How does this work with multiple remote cores with the ZCU102?
How can I make remote proc connections to both R5 processors?
Thanks,
...Duane Murphy
--
You received this message because you are subscribed to the Google Groups "open-amp" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
open-amp+u...@googlegroups.com.
To post to this group, send email to open...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.