--
You received this message because you are subscribed to the Google Groups "chromium-mojo" group.
To unsubscribe from this group and stop receiving emails from it, send an email to chromium-moj...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/chromium-mojo/286d4a37-349d-4b98-a1b6-384255ad32cfn%40chromium.org.
On Mon, Jan 16, 2023 at 7:38 PM 'Andrew Moylan' via chromium-mojo <chromi...@chromium.org> wrote:

> Hi, expected Mojo IPC latency in practice has been discussed previously, e.g. in this thread from about 2 years ago. I have been using a rule of thumb of "up to 1 ms ping via Mojo is normal, with rare spikes to 10 ms" that I garnered from some hacky experiments. In that thread bgeffon mentioned a variety of improvements along the way. So I am wondering: what's the latest on Mojo IPC latency? Have there been major changes/improvements in the last couple of years?
>
> CC +ji...@google.com, nap...@google.com, who were recently interested in this question.

1 ms would be extremely high for general IPC latency. It's nowhere near that in isolation, but it's also probably not useful to talk about Mojo IPC latency in isolation. In practice our latency in Chromium is dominated by task scheduling latency at the receiver.

In the best case, the kernel may context-switch to the receiving IO thread immediately on send; and then, if the targeted endpoint lives on that IO thread, we can dispatch immediately and might see an end-to-end IPC latency as low as 10-20 microseconds.

More typically the context switch isn't immediate, so assume some microseconds go by before we hit the IO thread. Also more typically we don't bind Mojo receivers on the IO thread, so the IO thread will only wake for minimal routing work before posting a dispatch task to the appropriate thread. From there it depends on what the target thread is doing. For a busy renderer's main thread, dispatch task latency can range from a few microseconds to a few seconds.
I assume we are talking about the priority on the task runner: crsrc.org/c/base/task/task_traits.h;l=33

@Ken Rockot Correct me if I am wrong!
@Jing Wang I am guessing the existing code that handles touchscreen events is already running at a task runner priority higher than "USER_VISIBLE"?
My ad hoc experiments a couple of years ago (the ones that left me expecting 100-1000 us normal latency) were for ping/pong messages on an eve device between the Chromium UI sequence and the ML Service daemon.

I didn't experiment with higher task runner priorities; that would be interesting.
Hi Ken,

A bit more context here: we are considering adding a new module/task in this class (https://source.chromium.org/chromium/chromium/src/+/main:ui/events/ozone/evdev/touch_event_converter_evdev.h;drc=71630a0e336a703e21de9ebeb98a5abf84e8c96c;l=43). It will receive a new data stream from the touchscreen, and we want to run a neural network model on that data to improve palm rejection. To run the NN model, one approach (probably the easiest to implement) is to use the existing ML Service via Mojo IPC. Since this is about the touch experience, we care about latency. Based on your previous emails, it seems reasonable to expect the total extra latency of calling the ML Service over Mojo (excluding the actual model inference time) to be under 100 us in most cases and under 1 ms in extreme cases, which we think satisfies our needs. Please correct me if I'm wrong.