Batch predictions from a Tensorflow Saved model


Harshit Dwivedi

Feb 12, 2020, 12:58:04 AM2/12/20
to cloud-vision-discuss
Hi all!
Currently, I'm using the exported TensorFlow model in my code as follows:

# Run inference for a single image; 'scores:0' and 'Placeholder:0' are the
# output and input tensors of the exported graph.
y_pred = self.session.run('scores:0',
                          feed_dict={'Placeholder:0': [binary_image]})
return y_pred[0]  # predictions for the one image in the batch

However, this only processes one image at a time. I was wondering if there's a way to batch this process instead of running a new session for each image?

P.S. this isn't a TFLite model; it's a TensorFlow SavedModel exported with the following option:

[Attachment: Snipaste_2020-02-12_11-26-51.png]
Best,
Harshit Dwivedi

Harshit Dwivedi

Feb 12, 2020, 1:25:54 AM2/12/20
to cloud-vision-discuss
I just figured out how to do this. Instead of passing one image at a time, I can pass a list of images to the run() method:

# binary_images is a list of images; the whole list is fed as one batch
y_pred = self.session.run('scores:0',
                          feed_dict={'Placeholder:0': binary_images})

However, there doesn't seem to be a way to control the batch size: sometimes the call runs without any issues with a batch of 8, but other times it throws an OOM (out-of-memory) error.
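One way to bound peak memory is to split the image list into fixed-size chunks and run the session on each chunk in turn. A minimal sketch of the idea; the helper name `predict_in_chunks`, the `predict_fn` callback, and the chunk size of 4 are assumptions, with `predict_fn` standing in for the `session.run` call above:

```python
def predict_in_chunks(predict_fn, images, chunk_size=4):
    """Run predict_fn over `images` in fixed-size chunks to bound peak memory.

    predict_fn is assumed to map a list of images to a list of scores, e.g.
    lambda imgs: session.run('scores:0', feed_dict={'Placeholder:0': imgs})
    """
    predictions = []
    for start in range(0, len(images), chunk_size):
        chunk = images[start:start + chunk_size]
        predictions.extend(predict_fn(chunk))
    return predictions
```

If a chunk size of 4 still OOMs, lowering it trades throughput for a smaller memory footprint until the largest batch fits.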

Does anyone know how to effectively handle this?
Best,
Harshit Dwivedi

Monica (Google Cloud Platform)

Feb 18, 2020, 3:58:40 PM2/18/20
to cloud-vision-discuss
Hello,

To see whether the OOM errors are caused by the model itself, please test a batch prediction against the Google-served AutoML model you trained, as described here [1].

If that works, then the OOM is most likely due to the memory limit of the system running the exported container [2], and you will need to increase your server's memory to resolve the issue.

Otherwise, if batch predictions against the Google-served AutoML model [1] also fail with OOM, it is recommended to open an issue in the Public Issue Tracker [3] to report it to the engineering team, as it may be a memory leak in TensorFlow (e.g. these reports [4]).

