Slow Training with MemoryData Layer


İlker Kesen

Sep 28, 2016, 11:22:59 AM
to Caffe Users
Hi all,

As far as I understand, users are able to load a desired portion of the data into GPU memory through the MemoryData layer. However, training takes longer with MemoryData compared to the LMDB Data layer. My experiments are based on the LeNet MNIST example distributed with Caffe, and I am using PyCaffe. I provide my additions below.

Related section of the Python script:
caffe.set_mode_gpu()
caffe.set_device(0)

solver = caffe.SGDSolver('lenet_solver.prototxt')
solver.net.set_input_arrays(xtrn, ytrn) # dtype=np.float32
solver.solve()
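
For reference, xtrn and ytrn are prepared roughly like this (a sketch only; load_mnist_images and load_mnist_labels stand in for my actual loading code, and the shapes are what set_input_arrays expects):

import numpy as np

# Placeholder loaders standing in for the actual MNIST reading code.
xtrn = load_mnist_images('train-images-idx3-ubyte')  # shape (60000, 28, 28)
ytrn = load_mnist_labels('train-labels-idx1-ubyte')  # shape (60000,)

# MemoryData wants contiguous float32 data shaped (N, C, H, W);
# PyCaffe reshapes 1-D float32 labels to (N, 1, 1, 1) internally.
xtrn = np.ascontiguousarray(xtrn.reshape(-1, 1, 28, 28).astype(np.float32))
ytrn = np.ascontiguousarray(ytrn.astype(np.float32))

# N must be a multiple of the MemoryData batch_size (100 here).
assert xtrn.shape[0] % 100 == 0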

My replacement for the Data layers in examples/mnist/lenet_train_test.prototxt:
layer {
  type: "MemoryData"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  memory_data_param {
    batch_size: 100
    channels: 1
    height: 28
    width: 28
  }
}
# same for the TEST phase, though I am not using it.

Finally, this is my lenet_solver.prototxt:
net: "lenet_train_test.prototxt"
base_lr: 0.1
lr_policy: "fixed"
display: 0
random_seed: 1
max_iter: 600
snapshot_after_train: false
solver_mode: GPU

The original example with the above solver configuration takes 3.55 sec, and with the MemoryData layer it takes 3.62 sec. Where is the mistake? How can I speed up the training process? Or is it just because I am using Python instead of C++ (I will try that soon as well)?
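
The timings are plain wall-clock measurements around solver.solve(), roughly like this (a sketch; the exact measurement code is not shown above):

import time

start = time.time()
solver.solve()
print('training took %.2f sec' % (time.time() - start))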

Thanks in advance.

İlker Kesen

Sep 29, 2016, 4:47:12 PM
to Caffe Users
A further question: is this MemoryData layer the fastest way to feed data in Caffe?

İlker Kesen

Sep 30, 2016, 10:44:29 AM
to Caffe Users
I have just realized that Caffe (with Python) uses too much memory! Is that because of the use of NumPy arrays? I am working with the MNIST data as float32 arrays, which means (785 * 60000 * 4) / (1024 * 1024) ~= 180 MB of memory. However, when I try to use all MNIST data as a single batch, it runs out of memory (more than 10 GB!). I have also seen that a single batch (size=100) takes ~23 MB, where it should be less than 1 MB according to my calculations. I know this is an optimistic estimate that cannot be reached exactly, but ~23 MB per batch seems too inefficient.

My Torch implementation handles LeNet MNIST training in less than 500 MB and is able to load all the data onto the GPU. What should I do to overcome this problem? I just want to load all the data onto the GPU and draw minibatches from GPU memory, that's all.
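
The back-of-the-envelope numbers above can be checked with NumPy (the array shapes are assumptions based on the MNIST dimensions used earlier):

import numpy as np

# Full training set: 60000 examples of 1x28x28 float32 pixels plus a float32 label each.
xtrn = np.zeros((60000, 1, 28, 28), dtype=np.float32)
ytrn = np.zeros((60000,), dtype=np.float32)
print((xtrn.nbytes + ytrn.nbytes) / (1024.0 ** 2))  # ~179.7 MB

# A single batch of 100 examples.
print((xtrn[:100].nbytes + ytrn[:100].nbytes) / (1024.0 ** 2))  # ~0.3 MB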