The MediaPipe Framework is the lower layer upon which the Solutions tasks are built. Documentation is here:
https://ai.google.dev/edge/mediapipe/framework

Calculators are the nodes that perform the actual computations, and they are the ones that benefit from parallelism.
There are different techniques for parallelisation that would benefit this kind of element in general; some that come to mind are:
- using vectorisation instructions specific to the microprocessor
- using libraries like OpenMP that parallelise functions using thread pools
- using threads manually, as you suggested
- using dedicated hardware, like GPUs (see CUDA) or custom chips for AI
As I understand it, the commonly used calculators are neural networks based on TensorFlow, and these are implemented with parallelisation in mind from the start. This is the model card for the pose landmarker used in their example; have you tried a lighter model? I see they have three versions:
https://storage.googleapis.com/mediapipe-assets/Model%20Card%20BlazePose%20GHUM%203D.pdf

Regards,
Andrea
I use the Kotlin interface, and I do not have any profiling analysis; it’s just that I have seen good efficiency gains in other parts of my application from using channels and parallelism.
I can’t seem to find any docs on the internals of the MediaPipe library, so what do you base your opinion that it is “highly optimised for parallelism” on? Would you happen to have any references?
Regards,
Oddbjørn
Hi,
just out of curiosity, is your app implemented using the Java/Kotlin interface, or are you working fully at the native layer?
In my opinion, the internal pipeline is surely already highly optimised for parallelism, so I don't think you would gain any advantage from your strategy. Do you have any profiling analysis that suggests otherwise?
Regards