ML Kit custom model file size limit

Will Battel

Mar 17, 2019, 10:17:40 AM
to Firebase Google Group
We have an object detection TFLite model that we want to use with ML Kit. Unfortunately, it seems that Firebase limits models to 40MB. Our model is ~240MB, which is large but by no means extreme for these types of models. Is there no way for us to use it on Firebase?

Thanks.

Sachin Kotwani

Mar 18, 2019, 7:46:06 AM
to fireba...@googlegroups.com
Hi Will,

You can still bundle the model locally with the app, but it's too big to host with ML Kit. By bundling it alone you lose the ability to swap the model dynamically or run experiments, but you can still use ML Kit's custom model SDK to run inference. The model will also be available immediately at install, of course.
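
For reference, here's a minimal sketch of the bundled-model path on Android. It uses the plain TensorFlow Lite interpreter rather than the ML Kit wrapper classes (whose names have varied across SDK versions), and the asset name tiny_yolo.tflite is just a placeholder:

    import android.content.Context
    import org.tensorflow.lite.Interpreter
    import java.io.FileInputStream
    import java.nio.MappedByteBuffer
    import java.nio.channels.FileChannel

    // Memory-map a model that ships inside the APK's assets/ folder.
    // (Mark .tflite files as noCompress in Gradle so openFd() works.)
    fun loadBundledModel(context: Context, assetName: String): MappedByteBuffer {
        val fd = context.assets.openFd(assetName)
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }

    // The interpreter runs fully on-device, so no hosting size limit applies.
    fun createTinyYoloInterpreter(context: Context): Interpreter =
        Interpreter(loadBundledModel(context, "tiny_yolo.tflite"))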

I'd love to hear more about your use case if you wouldn't mind sharing. We don't see many requests for on-device models larger than the supported limit.

Thanks,

Sachin
PM for ML Kit

Will Battel

Mar 18, 2019, 3:14:06 PM
to Firebase Google Group
Thanks for the info, Sachin.

We have an app on Firebase that uses these models for real-time object detection from the device's camera. We use re-trained YOLOv3 models. On older devices we run Tiny YOLO for significantly faster inference; on newer devices, such as the latest iPhones with dedicated neural hardware, we use the full YOLO model. In our testing, these newer phones can run the large models very quickly.

The objects our app detects are quite small. YOLOv3 works well for this because it makes predictions at multiple scales, which helps it find especially large or small objects. Tiny YOLO is good at this; full YOLO is great at it. For phones with more powerful hardware, we want to use the full model.

Here is the ideal use case we want ML Kit for:
- We ship Tiny YOLO (~35MB) with the app binary.
- During the first app session, we use Tiny YOLO to benchmark inference time.
- We have a target FPS we want our users' devices to hit. If Tiny YOLO far exceeds that target (i.e., the user has a new phone with a powerful GPU or neural processor) and the user opts in to the large download, we fetch the full YOLO model (~240MB) and switch to it; otherwise we keep using Tiny YOLO for inference. The sketch after this list shows the logic we have in mind.
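
Concretely, the benchmark-and-switch step might look something like the following Kotlin sketch. The 20 FPS target, the 2x "far exceeds" headroom factor, and the run counts are placeholder numbers, and the input/output buffers are whatever the model's shapes require:

    import android.os.SystemClock
    import org.tensorflow.lite.Interpreter
    import java.nio.ByteBuffer

    const val TARGET_FPS = 20.0  // placeholder FPS goal
    const val HEADROOM = 2.0     // "far exceeds" = 2x the target, for example

    // Time repeated inferences on the bundled Tiny YOLO interpreter.
    fun benchmarkFps(interpreter: Interpreter, input: ByteBuffer, output: Any, runs: Int = 30): Double {
        repeat(5) { interpreter.run(input, output) }  // warm-up, not timed
        val startMs = SystemClock.elapsedRealtime()
        repeat(runs) { interpreter.run(input, output) }
        val elapsedMs = SystemClock.elapsedRealtime() - startMs
        return runs * 1000.0 / elapsedMs
    }

    // Only fetch the ~240MB full model when the device clearly has headroom
    // and the user has agreed to the large download.
    fun shouldDownloadFullModel(measuredFps: Double, userOptedIn: Boolean): Boolean =
        userOptedIn && measuredFps >= TARGET_FPS * HEADROOM

When shouldDownloadFullModel returns true, the app would kick off the full-model download (from our own hosting, given the 40MB limit) and swap interpreters once the file is on disk.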

I'd be happy to talk offline if you'd like more details specific to our models.

Thanks,
Will