For example, if I send an Ethernet frame from Linux to the RTOS, the RPMsg callback for the Ethernet endpoint on the RTOS side is not called until I make a printf() call on the RTOS side.
What can cause this?
Who is responsible for executing the RX callbacks for the endpoints?
Thanks,
Gaute
Thanks for your reply.
I tried, as you suggested, to call virtqueue_notification() from the interrupt handler, but as you are already aware, it takes a mutex. That mutex is implemented in libmetal as a call to a non-ISR-safe FreeRTOS function, so a FreeRTOS assertion was triggered.
Instead, I started a new thread that blocks on a semaphore given by the IPI handler and then calls virtqueue_notification(). This seems to work OK.
On my Zynq7000 system with Linux 4.9 on one core and FreeRTOS 9.0.1 with lwIP 2.0.3 on the other core, I get a ping time of around 1.6 ms, and iperf shows around 1 Mbit/s. I expected somewhat better performance, but at least it works. I'm not sure what the bottleneck is at the moment...
Another question: on the RTOS side, are the rpmsg_send*() functions thread-safe, or should I use a mutex around those calls?
How will this behave (FreeRTOS with multiple RPMsg callbacks) on the new RPMsg implementation in OpenAMP?
Ok, so there is no need to call virtqueue_notification() after the IPI handler like I had to do before?
I did this change:
--- apps/machine/zynq7/zynq_a9_rproc.c 2018-10-30 23:00:17.000000000 +0100
+++ test_app/zynq_a9_rproc.c 2018-11-24 16:54:02.623563900 +0100
@@ -22,6 +22,8 @@
#include <metal/irq.h>
#include <platform_info.h>
#include <xil_printf.h>
+#include "FreeRTOS.h"
+#include "task.h"
/* SCUGIC macros */
#define GIC_DIST_SOFTINT 0xF00
@@ -30,6 +34,17 @@
#define GIC_SFI_TRIG_INTID_MASK 0x0000000F
#define GIC_CPU_ID_BASE (1 << 4)
+static TaskHandle_t notify_task_handle = NULL;
+
+static void notify_task(void *parameters)
+{
+ struct remoteproc *rproc = (struct remoteproc *)parameters;
+ for (;;) {
+ ulTaskNotifyTake(pdTRUE, portMAX_DELAY);
+ remoteproc_get_notification(rproc, RSC_NOTIFY_ID_ANY);
+ }
+}
+
static int zynq_a9_proc_irq_handler(int vect_id, void *data)
{
struct remoteproc *rproc = data;
@@ -40,6 +55,11 @@
return METAL_IRQ_NOT_HANDLED;
prproc = rproc->priv;
atomic_flag_clear(&prproc->nokick);
+
+ BaseType_t xHigherPriorityTaskWoken = pdFALSE;
+ vTaskNotifyGiveFromISR(notify_task_handle, &xHigherPriorityTaskWoken);
+ portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
+
return METAL_IRQ_HANDLED;
}
@@ -72,6 +92,9 @@
irq_vect = prproc->irq_notification;
metal_irq_register(irq_vect, zynq_a9_proc_irq_handler, NULL, rproc);
metal_irq_enable(irq_vect);
+
+ xTaskCreate(notify_task, "OpenAMP", 512, rproc, 16, &notify_task_handle);
+
xil_printf("Successfully intialize remoteproc.\r\n");
return rproc;
err1:
It seems to work OK, except that there appears to be a race condition involving the rdev->lock mutex.
After a few seconds of testing, the application deadlocks while waiting to acquire this mutex.
This is the stacktrace when the application has deadlocked:
Thread #1 57005 (Suspended : Signal : SIGTRAP:Trace/breakpoint trap)
__metal_mutex_acquire() at mutex.h:58 0x3016348c
metal_mutex_acquire() at mutex.h:58 0x30163534
rpmsg_virtio_rx_callback() at rpmsg_virtio.c:405 0x30163e78
virtqueue_notification() at virtqueue.c:569 0x3016a094
rproc_virtio_notified() at remoteproc_virtio.c:308 0x30168470
remoteproc_get_notification() at remoteproc.c:959 0x301676b0
notify_task() at zynq_a9_rproc.c:45 0x30002ebc
Do you have a suggestion on how to fix this?
It's not being called from an interrupt handler.
I can manually get out of the deadlock by changing the variable with a debugger, but after 5-10 seconds it ends up in the same deadlock again.
My application has multiple tasks calling the RPC proxy functions, in addition to one task sending lwIP network data over to the Linux side and receiving network data in a separate Rx callback.
> > > > Ok, so there is no need to call virtqueue_notification() after the
> > > > IPI handler like I had to do before?
> > > [Wendy] It depends on whether you use remoteproc. If you do, you can
> > > call remoteproc_get_notification(). You can take a look at this
> > > implementation:
> > > https://github.com/OpenAMP/open-amp/blob/master/apps/machine/zynqmp_r5/platform_info.c#L233
> > >
> > > Best Regards,
> > > Wendy
Is it possible to share your code? I am busy with some tasks at the moment; if you can provide the code, I will try it as soon as I am done with my urgent tasks.
Thanks,
Wendy
I came across the same problem. I set up the same mutex around remoteproc_get_notification() and every rpmsg_send() call.