How To Stride Crop an Image Efficiently?


Larry Lindsey

Jun 25, 2018, 4:09:10 PM
to TensorFlow.js Discussion
Hi,

Quick background: I'm trying to build a texture classifier using transfer learning, with MobileNet as the feature network. For a given image of, say, 512 x 512 pixels, I want to take a series of highly overlapping 224 x 224 crops to produce a tensor of shape [N, 224, 224, 3], where N is the batch size, each crop is 224 pixels in width and height, and there are 3 channels for RGB. The crops would be taken at origins (x, y) = [0, 0], [1, 0], [2, 0], ..., [1, 1], [2, 1], [2, 2], and so on, effectively sampling a field of view densely over the original image.
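For concreteness, the set of crop origins can be enumerated up front. A sketch in plain JavaScript (the `stride` parameter is my addition for illustration; with a stride of 32 on a 512-pixel side you get 10 positions per axis, i.e. exactly N = 100 crops):

```javascript
// Enumerate the top-left corners of all crops of side `crop`
// taken from a square image of side `size`, stepping by `stride` pixels.
function cropOrigins(size, crop, stride) {
  const origins = [];
  for (let y = 0; y + crop <= size; y += stride) {
    for (let x = 0; x + crop <= size; x += stride) {
      origins.push([x, y]);
    }
  }
  return origins;
}

// 512x512 image, 224x224 crops, stride 32 -> 10 x 10 = 100 origins.
const origins = cropOrigins(512, 224, 32);
console.log(origins.length);         // 100
console.log(origins[0], origins[1]); // [0, 0] [32, 0]
```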

Letting N be 100, if I create 100 crops and load them as tensors of shape [1, 224, 224, 3], I can tf.concat them to reach my desired batch size, but this operation alone takes > 2000 ms on my workstation. For comparison, MobileNet inference on the same batch takes about 50 ms, so it doesn't appear to be a matter of machine slowness. Is there a more efficient way to do this?

Cheers,

Larry

Nikhil Thorat

Jun 25, 2018, 6:41:34 PM
to lfli...@google.com, TensorFlow.js Discussion
Is the slicing or the concatenating the slow piece?

Unfortunately, concatenating all of those one at a time will run 100 separate programs, each allocating an increasingly large amount of memory.

Is this part of your training pipeline, or is this for inference? Are you running this in a browser or in Node? Have you measured the time the *second* call takes? It will reuse much of the memory that was allocated during the first call.


Larry Lindsey

Jun 25, 2018, 7:09:38 PM
to TensorFlow.js Discussion, lfli...@google.com
Concatenating is the slow part. That makes sense if it spawns 100 different programs, each allocating and deallocating memory. This is part of in-browser inference. The second call takes approximately the same amount of time (maybe ~100 ms faster).