Running MediaPipe Pose Landmarker on iOS

Vasil Poposki
Mar 28, 2024, 3:27:08 AM
to MediaPipe

In my React Native iOS application, I'm trying to set up the MediaPipe pose landmarker model. Since there's no direct guide for the iOS platform at developers.google.com/mediapipe/solutions/vision/pose_landmarker, I extracted the pose_detector.tflite and pose_landmarks_detector.tflite models from pose_landmarker_full.task. I then used the vision-camera-fast-tflite library to run pose_landmarks_detector.tflite.
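For reference, the way I invoke the extracted model looks roughly like the sketch below. This assumes a react-native-fast-tflite-style API (loadTensorflowModel / runSync); the exact names in the library I'm using may differ, and the frame-to-tensor conversion is only illustrative.

import { loadTensorflowModel } from 'react-native-fast-tflite';

// Rough sketch of how I run the extracted landmarks model on a single frame.
// Assumptions: a react-native-fast-tflite-style API; frameRgb is the camera
// frame already converted to a float32 RGB tensor.
async function runLandmarks(frameRgb: Float32Array) {
  // pose_landmarks_detector.tflite was extracted from pose_landmarker_full.task
  const model = await loadTensorflowModel(
    require('./assets/pose_landmarks_detector.tflite')
  );

  // Run inference; this is where I suspect my input preparation differs
  // from whatever preprocessing the MediaPipe pipeline applies internally.
  const outputs = model.runSync([frameRgb]);

  // The first output tensor should hold the landmark values
  // (x, y, z, visibility, presence per landmark), if I read the model right.
  return outputs[0];
}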

I managed to get some output. The model seems to react to movement, but the landmark positions are almost entirely wrong. Any ideas why?

I went through the guides for the other platforms and also examined the MediaPipe Python sources to see whether any preprocessing or postprocessing is applied around the model. However, I still don't have a reasonable explanation for why the model output is incorrect.
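In case the input preparation is the culprit, this is roughly how I build the input tensor before running inference. The 256x256 input size and the [0, 1] float normalization are assumptions I took from the pose landmarker model description; I haven't confirmed them against the extracted .tflite, and I'm also not sure whether the model expects the full frame or only a crop around the detected person.

// Convert an RGB byte buffer into the float tensor I assume the
// landmarks model expects: 256x256x3, float32, values in [0, 1].
const INPUT_SIZE = 256;

function toInputTensor(rgbBytes: Uint8Array): Float32Array {
  const expectedLength = INPUT_SIZE * INPUT_SIZE * 3;
  if (rgbBytes.length !== expectedLength) {
    throw new Error(
      `Expected a ${INPUT_SIZE}x${INPUT_SIZE} RGB buffer, got ${rgbBytes.length} bytes`
    );
  }
  const input = new Float32Array(expectedLength);
  for (let i = 0; i < expectedLength; i++) {
    input[i] = rgbBytes[i] / 255.0; // scale 0..255 bytes to 0..1 floats
  }
  return input;
}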

[Attachment: img_20240328_081821.jpg]
