Object Detection - A complete fail

Henry Magnuski

Feb 28, 2021, 1:37:57 PM
to cloud-vision-discuss
I trained two models for an object-detection task (hosted and edge) using 3000+ images, and the training in both cases seemed to complete successfully (precision 98.51%, recall 98.51%).

Yet when I test the models with some test images, it's a complete fail. The results are very disappointing: no objects detected in an image that clearly contains four or five, a single bounding box drawn around multiple objects, and so on. Not a single correct inference.

Why would there be such a huge gap between the training results and my test results?

Hank


tielve

Mar 1, 2021, 6:49:04 AM
to cloud-vision-discuss
Hi,

I find it a very unusual coincidence that you have the exact same precision and recall. Could it be a typo?

Regarding the inferences, this issue is strange: the model is already evaluated automatically on a held-out part of its dataset, and given the accuracy you report, prediction should work.
Besides an issue with the model itself, there could be several other causes, such as how the prediction call is made or the distribution of your test images.
Did you run your tests from the Console or by calling the API?
If you called the API, please share the details of your call and, if possible, an example code snippet.
Are your test images from the same distribution as the model's dataset (i.e., do they come from the same device type, are they of the same quality, etc.)?
Note: Please remember to avoid publicly sharing any sensitive identifiable information (project ID, passwords, phone numbers, etc.).
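For reference, a prediction call with the Python client usually looks something like the sketch below. This is only a rough illustration, not your setup: the project ID, model ID, region, and image path are placeholders, and the exact request format can vary between versions of the google-cloud-automl library.

from google.cloud import automl

# Placeholders -- replace with your own values.
PROJECT_ID = "your-project-id"
MODEL_ID = "your-model-id"

client = automl.PredictionServiceClient()
# Assumes the model is deployed in us-central1.
model_full_id = client.model_path(PROJECT_ID, "us-central1", MODEL_ID)

with open("test_image.jpg", "rb") as f:
    content = f.read()

# score_threshold controls which detections are returned; a low value
# helps check whether the model detects anything at all.
request = {
    "name": model_full_id,
    "payload": {"image": {"image_bytes": content}},
    "params": {"score_threshold": "0.1"},
}
response = client.predict(request=request)

for annotation in response.payload:
    detection = annotation.image_object_detection
    print(annotation.display_name, detection.score, detection.bounding_box)

Running one of your failing test images through a call like this with a low threshold could help narrow down whether the problem is in the model itself or in how the results are displayed.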

Regards,

Henry Magnuski

Mar 1, 2021, 11:24:49 AM
to cloud-vision-discuss
Thank you for your response.

Not a typo, and I noticed the same unusual results. In the first training run (hosted model, 2192 training + validation images) the precision was 91.16% and recall was 89.09%. In the second training run (edge model, 3003 training + validation images) the results were 98.51% precision, 98.51% recall.

There were five classes of objects, and for each class the training + validation images contained only one example object, posed in 3 different views x 8 random rotations (i.e., 24 different views of that object).

My inference testing was done via the Google Cloud web page (no API used) with a different set of test images (multiple objects per image, in multiple different views). As I stated earlier, on these test images the model failed to get any bounding boxes correct and produced no correct labels. The quality of the test images is the same as that of the training and validation images.

Regards,

Hank

Kevin

Mar 2, 2021, 4:58:52 AM
to cloud-vision-discuss
Hi,

Since this behavior is unexpected, I have created a public and a private issue on Issue Tracker.
As Google Groups is reserved for general product discussion, we will continue troubleshooting your issue on Issue Tracker.

Thank you
Regards,