Strange behaviour of "TensorFlow Lite Example On-device Model Personalization" demo app


Jerry Y

May 12, 2022, 11:43:19 AM
to TensorFlow Lite
I tested the TensorFlow Lite Example On-device Model Personalization demo app (https://github.com/tensorflow/examples/tree/master/lite/examples/model_personalization), and found this strange behaviour. 

When training the TFLite model on-device, the model converges on the first batch of data. But if new classes of data are added in a later batch, the model won't converge and can't infer the new classes.

For example, I add samples of three classes and train the model. The model converges with the loss dropping close to 0. Then I add samples of a fourth class and train again. The model won't converge; the loss stays high (around 8 or 12). As a result, the model can't identify the fourth class of images.

Is this behaviour a known problem? Can it be fixed? Please help. Thanks.

Haoliang Zhang

May 12, 2022, 12:36:56 PM
to Jerry Y, Jared Lim, TensorFlow Lite
+Jared Lim This sounds really weird. Does this happen whenever you do one batch of training, and add another batch of training data (with new labels)?



--
Best,
Haoliang

Jerry Y

May 12, 2022, 1:02:34 PM
to TensorFlow Lite, Haoliang Zhang, TensorFlow Lite, Jerry Y, jare...@google.com
Yes. This happens whenever new labels (classes) are added in the second or later batches. Without new labels, adding more data for the existing labels is fine, and the training converges with the loss close to 0.

Haoliang Zhang

May 12, 2022, 5:02:30 PM
to Jerry Y, TensorFlow Lite, jare...@google.com
Thanks for the context. Could you provide step-by-step instructions for us to reproduce your issue? If possible, you could also share your training examples with us so we can work on it.
--
Best,
Haoliang

Jerry Y

May 12, 2022, 6:27:41 PM
to TensorFlow Lite, Haoliang Zhang, TensorFlow Lite, jare...@google.com, Jerry Y
Steps:
1. Follow the instructions at https://github.com/tensorflow/examples/tree/master/lite/examples/model_personalization, and build and run the demo app on an Android 11 phone.
2. Add 7 samples for each of the first three labels (three different small objects on white paper), 21 samples in total. Train the model. The model converges with the loss close to 0 and can identify the three labels of objects.
3. Save the trained weights to a checkpoint file. Close the app.
4. Open the app again, load the saved weights into the model.
5. Add 7 samples for each of the second and third labels (using the objects from step 2), and 7 samples for the fourth label (a fourth small object on white paper), 21 samples in total. Train the model. The model does not converge; the loss stays high. The model cannot identify the fourth object.
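
For reference, here is a rough Python/Keras approximation of what these steps amount to on the training side (an illustrative sketch only, not the demo's actual API; the feature dimension, sample counts and file name are assumptions):

import numpy as np
import tensorflow as tf

FEATURE_DIM = 1280   # assumed size of the frozen base model's bottleneck features
NUM_CLASSES = 4      # the head is sized for 4 classes up front

def fake_features(n, seed):
    # Stand-in for bottleneck features produced by the frozen feature extractor.
    return np.random.default_rng(seed).normal(size=(n, FEATURE_DIM)).astype("float32")

# Small trainable softmax head on top of the (frozen) base model.
head = tf.keras.Sequential([
    tf.keras.Input(shape=(FEATURE_DIM,)),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
head.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

# Step 2: 7 samples each for classes 0, 1, 2 (21 samples in total).
x1 = np.concatenate([fake_features(7, s) for s in (0, 1, 2)])
y1 = np.repeat([0, 1, 2], 7)
head.fit(x1, y1, batch_size=20, epochs=50, verbose=0)

# Step 3: save the trained head weights (the "checkpoint").
head.save_weights("head.weights.h5")

# Steps 4-5: restore the weights, then continue training on a batch that
# re-uses classes 1 and 2 and introduces class 3 with only 7 samples.
head.load_weights("head.weights.h5")
x2 = np.concatenate([fake_features(7, s) for s in (11, 12, 13)])
y2 = np.repeat([1, 2, 3], 7)
history = head.fit(x2, y2, batch_size=20, epochs=50, verbose=0)
print("final loss:", history.history["loss"][-1])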

Please help to fix the problem. Thanks!

Jared Lim

May 13, 2022, 5:46:38 PM
to Jerry Y, TensorFlow Lite, Haoliang Zhang
Hi Jerry,

Thanks for the detailed instructions. Based on your description, the total numbers of samples collected for each of the categories A, B, C, and D are:
A: 7, B: 14, C: 14, D: 7
and the model weights are already fitted to the first three categories from the initial training (note that the Android code keeps training the model on the same dataset several times until the pause button is clicked, so the weights may end up over-fitted to those categories). So the model may have a hard time converging on a newly introduced category that it has been trained on much less and that also has one of the smallest sample counts (i.e. 7 samples).
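
As a side note on the loss values you reported: per-example softmax cross-entropy is -log(p) of the probability assigned to the correct class, so a loss around 8-12 means the model is giving the new class essentially zero probability, e.g.:

import math

# A reported per-example loss of ~8 or ~12 implies the model assigns the
# correct (new) class practically zero probability: p = exp(-loss).
for loss in (8.0, 12.0):
    print(f"loss {loss:>4} -> p(correct class) ~ {math.exp(-loss):.1e}")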

Here are some suggestions we would like to offer:
1. Increase the number of samples for the fourth category.
2. Tweak the batch size in the demo code, in case that helps the loss converge.
3. Worst case, re-initialize the model when a new category is introduced and start the training over (a rough sketch follows below).
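
For suggestion 3, a rough sketch of what "re-initialize and start over" could look like, in the same toy head-only setting as the repro sketch above (illustrative only, not the demo's actual code):

import numpy as np
import tensorflow as tf

FEATURE_DIM = 1280   # assumed bottleneck size of the frozen base model
NUM_CLASSES = 4

def fake_features(n, seed):
    return np.random.default_rng(seed).normal(size=(n, FEATURE_DIM)).astype("float32")

def build_fresh_head():
    # A freshly initialized softmax head; no weights are loaded from the old checkpoint.
    head = tf.keras.Sequential([
        tf.keras.Input(shape=(FEATURE_DIM,)),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    head.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
    return head

# When category D is introduced, retrain a fresh head on all collected samples
# (A, B, C and D together), ideally with roughly balanced counts per class.
x_all = np.concatenate([fake_features(14, s) for s in range(4)])
y_all = np.repeat([0, 1, 2, 3], 14)

head = build_fresh_head()   # re-initialize instead of loading the old weights
head.fit(x_all, y_all, batch_size=20, epochs=50, shuffle=True, verbose=0)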

Hope this helps with your case.

Best regards,
Jared Lim

Jerry Y

May 14, 2022, 3:30:35 PM
to TensorFlow Lite, jare...@google.com, TensorFlow Lite, Haoliang Zhang, Jerry Y
Hi Jared,

Thank you for your suggestions! 
I have tried your suggestions #1 and #2. The loss is still large and does not go down. I will try #3 later.

My testing of the demo app suggests that the On-Device Transfer Model Personalization can support at most 20 labels/classes, because the batch size is 20. Introducing any more labels in the second or later batches causes the model to stop converging.

Is this true? If not, how many labels/classes at most can Transfer Model Personalization support? Do you have any study on the capacity of On-Device Transfer Model Personalization? Can it really be used in real-world business applications?

Best regards,
Jerry Y.

Jared Lim

May 18, 2022, 2:00:46 PM
to Jerry Y, TensorFlow Lite, Haoliang Zhang
Hi Jerry,

We don't have any formal study specifically on the capacity of the On-Device Transfer Learning model, but theoretically there shouldn't be a limit on the number of labels. The batch size of 20 was chosen as an adequate size for reasonable convergence in a reasonable amount of time for demo purposes, but please feel free to tweak the batch size parameter as well when you test your own app and experiments.

Transfer learning is an ML technique that focuses on learning knowledge from one problem and transferring it to solve another, similar but different, problem (e.g. learning image classification from dog-breed images and later using that knowledge to classify cat breeds; this is useful when you don't have many cat images but do have many dog images and still want to develop an ML model that can classify cats). The On-Device Transfer Learning model is an example of the transfer learning concept applied to a real-world application using the TensorFlow Lite framework. The concept of transfer learning is, and can be, used in real-world business applications.
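
For readers new to the idea, a minimal Keras sketch of the usual pattern (a frozen pre-trained base plus a small trainable head; the class count and image size here are illustrative):

import tensorflow as tf

NUM_CLASSES = 4  # e.g. the number of personalized categories (illustrative)

# Pre-trained feature extractor, frozen so that only the new head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(personal_images, personal_labels, ...) would then train only the head.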

Best regards,
Jared Lim
