Python layer for data augmentation

Alexey Abramov

May 2, 2016, 7:54:36 AM
to Caffe Users
Hello everyone,

   I have a question regarding data augmentation with the Caffe library. I want to implement my own data augmentations, but I have no idea what the best way of doing it is. I'm implementing a Python layer for data augmentation which is supposed to take images from the input layer and add augmentations to them. For AlexNet it looks as follows:

name: "CaffeNet"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: false
    crop_size: 227
    mean_file: "../../data/mydata/imagenet_mean.binaryproto"
  }
  data_param {
    source: "../../examples/mydata/train_lmdb"
    batch_size: 256
    backend: LMDB
  }
}
layer {
  name: "dataaugmentation"
  type: "Python"
  bottom: "data"
  top: "dataaugmentation"
  python_param {
    module: "DataAugmentation"
    layer: "DataAugmentationLayer"
  }
}


Thus the first conv layer is connected to the "dataaugmentation" layer. I can access all images in the batch and apply augmentations to them, but I don't know what the right way is to put all the images back into the batch... With 256 images and (let's say) 2 operations I would need a batch size of 512, but then I would have to extend an already pre-defined batch size. Doing it this way I get a conflict with the loss layer, which still expects the original batch size (the "label" blob keeps its 256 entries)... Does anybody have an idea what the best way to do this would be?
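(For reference, a minimal skeleton of such a layer might look like the following. The module and class names are taken from the python_param above; the rest is only a sketch of the usual setup/reshape/forward/backward interface, with the actual augmentations left out.)

# DataAugmentation.py -- minimal sketch of the layer referenced above
import caffe
import numpy as np

class DataAugmentationLayer(caffe.Layer):

    def setup(self, bottom, top):
        if len(bottom) != 1:
            raise Exception("expects exactly one bottom blob (the images)")

    def reshape(self, bottom, top):
        # Keep the top the same shape as the bottom: downstream layers and the
        # loss (via the unchanged "label" blob) still expect the original batch size.
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        # Placeholder: pass the batch through unchanged; augmentations would go here.
        top[0].data[...] = bottom[0].data

    def backward(self, top, propagate_down, bottom):
        # A data-augmentation layer does not need to propagate gradients.
        pass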

Any advice is kindly welcome! Thanks!

Best,
Alexey

Jan

May 2, 2016, 8:54:21 AM
to Caffe Users
The go-to way to do it would be to randomly apply or not apply a data augmentation to a sample. So don't try to actually extend the batch with modified versions of the samples; just modify them in place. Over the long term (doing multiple epochs) the network will see both the augmented and the non-augmented version of a sample, so that is fine. That is also the way the built-in augmentation (mirroring) works.
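Something along these lines in the forward() of the Python layer would do (a rough sketch, assuming the usual N x C x H x W blob layout, numpy imported as np, and random mirroring as the example augmentation):

    def forward(self, bottom, top):
        # Pass the batch through with its size unchanged; mirror each sample
        # with probability 0.5, so the "label" blob and the loss layer are unaffected.
        for i in range(bottom[0].data.shape[0]):
            sample = bottom[0].data[i]
            if np.random.rand() < 0.5:
                sample = sample[:, :, ::-1]  # flip along the width axis
            top[0].data[i, ...] = sample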

Jan

Alexey Abramov

May 2, 2016, 9:50:03 AM
to Caffe Users
Makes sense to me, thanks!

Alexey Abramov

May 3, 2016, 9:56:05 AM
to Caffe Users
By the way, any idea why an additional Python layer, right after the input data layer, leads to higher GPU memory usage? With such a layer I have to decrease my batch size...


yxc...@gmail.com

May 7, 2017, 4:22:21 AM
to Caffe Users
Hi, have you figured out why?

Jonathan R. Williford

May 7, 2017, 4:49:20 AM
to yxc...@gmail.com, Caffe Users
Any time you add a blob, it will increase the memory usage. The layer introduces the blob "dataaugmentation". I'm guessing "data" is then loaded into the GPU, the data augmentation layer probably copies the data back to the CPU to perform its augmentation, and then the data gets re-copied to the GPU. I'm not sure if this is the entire story; I don't know how much the memory usage increases by.

You might want to try having the Python layer read in the data. It might be difficult to make the code as efficient as the C++ code though.
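A rough sketch of what such a Python data layer could look like (the file names, the param_str format and the use of OpenCV here are just assumptions for illustration; it replaces the "Data" layer instead of sitting behind it, so no extra blob is needed):

# ImageAugDataLayer.py -- hypothetical sketch of a Python data layer that
# reads and augments the images itself instead of taking a bottom blob.
import random
import caffe
import numpy as np
import cv2  # assumes OpenCV is available for image loading

class ImageAugDataLayer(caffe.Layer):

    def setup(self, bottom, top):
        # param_str (set in the prototxt) is assumed to be "listfile batch_size";
        # each line of the list file is "path/to/image.jpg label".
        listfile, batch_size = self.param_str.split()
        self.batch_size = int(batch_size)
        with open(listfile) as f:
            self.samples = [line.split() for line in f]

    def reshape(self, bottom, top):
        top[0].reshape(self.batch_size, 3, 227, 227)  # images
        top[1].reshape(self.batch_size)               # labels

    def forward(self, bottom, top):
        for i in range(self.batch_size):
            path, label = random.choice(self.samples)
            img = cv2.imread(path)               # H x W x C (BGR)
            img = cv2.resize(img, (227, 227))
            if random.random() < 0.5:            # example augmentation: mirroring
                img = img[:, ::-1, :]
            top[0].data[i, ...] = img.transpose(2, 0, 1)  # to C x H x W
            top[1].data[i] = float(label)

    def backward(self, top, propagate_down, bottom):
        pass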

You could also try making the computation in-place, although I think you would still have the data being copied to the GPU before the augmentation.

Cheers,
Jonathan
