I find it quite an unusual coincidence that you have exactly the same precision and recall. Could it be a typo?
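For context, precision and recall only come out identical when the number of false positives equals the number of false negatives, which is why matching values are worth double-checking. A small illustrative sketch (the counts below are made up, not from your model):

```python
# Precision = TP / (TP + FP); Recall = TP / (TP + FN).
# They coincide only when FP == FN (for TP > 0).
def precision_recall(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Equal only because FP == FN here:
print(precision_recall(tp=90, fp=10, fn=10))  # (0.9, 0.9)
# Different as soon as FP != FN:
print(precision_recall(tp=90, fp=10, fn=30))  # (0.9, 0.75)
```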
Regarding the inferences, this issue is strange because the model is already tested automatically on part of its dataset, and given the accuracy you report, prediction should work.
Besides an issue with the model itself, there could be several causes, such as how the prediction call is made or the distribution of your test images.
Did you run your tests from the Console or by calling the API?
If you called the API, please share the details of your call and, if possible, an example code snippet.
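For reference, image-prediction APIs typically expect the image base64-encoded inside a JSON body. A minimal sketch of building such a payload; the field names (`instances`, `content`) are assumptions, so adapt them to the request format documented for the API you are actually calling:

```python
import base64
import json

# Build a JSON request body with a base64-encoded image.
# NOTE: the "instances"/"content" field names are an assumption --
# check the request format in your product's API documentation.
def build_payload(image_bytes: bytes) -> str:
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return json.dumps({"instances": [{"content": encoded}]})

payload = build_payload(b"\x89PNG... (image bytes here)")
print(json.loads(payload).keys())
```

Sharing the equivalent of this (with any identifiers redacted) would help us check whether the request itself is well-formed.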
Are your test images from the same distribution as the model's dataset? (e.g., do the images come from the same device type, are they of the same quality, ...)
Note: Please remember to avoid publicly sharing any sensitive or identifiable information (project ID, passwords, phone numbers, ...)
Regards,