Hi everyone,
I hope you’re all doing well! I'm currently working on an interactive film project called Light Years Apart, which uses the MediaPipe Web API to process biometric data, including audience facial recognition and landmarks. You can check out more about it at www.lya-movie.com.
I have a couple of questions regarding the privacy policy:
If anyone has insights or knows how to get in touch with the MediaPipe team for more detailed info, I’d really appreciate it!
Thanks so much!
Heidi
Hi Heidi!
For information on the specific models and how they were trained, the model cards are probably the best reference available. They are linked in the "Models" section of each task overview. For example, if you wanted more information on our BlazePose GHUM 3D pose landmark detection model, the pose landmarker models section of our documentation links to the model card for that particular model. Our published geographic evaluation results and fairness criteria, metrics, and results can all be found there.
And yes, on-device processing continues to be a core feature of our web Tasks APIs: all machine learning inference runs directly in the browser, so no server is involved in the machine learning process for applications like facial recognition and landmark analysis.
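To make the in-browser flow concrete, here is a minimal sketch using the `@mediapipe/tasks-vision` package. The CDN path for the WASM runtime and the `face_landmarker.task` model file name are illustrative; in a fully offline setup you could self-host both assets so that, after the initial download, no network requests are made at all.

```javascript
import { FilesetResolver, FaceLandmarker } from "@mediapipe/tasks-vision";

// Load the WASM runtime that executes the model in the browser.
// (This URL is an example; the assets can also be self-hosted.)
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);

// Create the landmarker from a model bundle. Once the .task file is
// fetched, inference happens entirely on-device.
const faceLandmarker = await FaceLandmarker.createFromOptions(vision, {
  baseOptions: {
    modelAssetPath: "face_landmarker.task", // illustrative local path
    delegate: "GPU",                        // or "CPU"
  },
  runningMode: "VIDEO",
});

// Run per-frame detection on a <video> element showing the webcam feed.
// Landmark results never leave the user's machine.
const video = document.querySelector("video");
function onFrame() {
  const results = faceLandmarker.detectForVideo(video, performance.now());
  // results.faceLandmarks is an array of landmark sets, one per face.
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);
```

Because the model and runtime execute client-side, any data handling beyond this point (e.g., whether your app logs or transmits landmark coordinates) is determined by your own application code, not by MediaPipe.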