Introduction to TF Lite and on-device machine learning
Demo: Build an Android app to recognize hand-written digits using the MNIST dataset
How to convert a TF model to TF Lite
Leverage hardware accelerators to speed up model inference
Optimize TF Lite models with quantization and pruning
Create TF Lite models easily with TF Lite Model Maker
Brief introduction to other Google products relevant to on-device ML: Coral, ML Kit, Cloud AutoML Edge
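A minimal sketch of the model-conversion step listed above, assuming a trained Keras model (a tiny untrained stand-in is used here so the snippet is self-contained):

```python
import tensorflow as tf

# Tiny stand-in for a trained MNIST classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert the Keras model to a TF Lite flatbuffer (returned as bytes).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the flatbuffer to disk so it can be bundled with an app.
with open("mnist.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file is what the Android demo would load via the TF Lite Interpreter.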
Choose a mobile-optimized architecture for your use case
Quantization: which technique should I use?
The benefits of pruning
Combining quantization and delegates for faster inference
Benchmarking and profiling model inference
Leveraging Firebase for better model deployment
Model metadata for easy model sharing and integration into mobile apps
--
You received this message because you are subscribed to the Google Groups "ML on Mobile OS Working Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mlwg-mobile...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/mlwg-mobile/CAMDTK-48%2Bwd5nLqjDFOxP0rFQQak%3D_Kh16jE4%2BPb_%2B9Zt0jDMA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
--
Hi,

Is the recording available on YouTube or any other platform?

Thank you
--
Below is the information +Biswajeet Mallik shared as part of the TF community training. :-)