MemoryDataLayer in python


Freerk Venhuizen

Sep 23, 2015, 3:04:07 PM
to Caffe Users
Instead of using a huge pre-created LMDB file, I would like to use the MemoryData layer to load my data on the fly.
This allows for real-time data augmentation and many other interesting things.
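For example, once a minibatch sits in a numpy array, augmenting it on the fly takes only a few lines (a minimal sketch, assuming numpy is imported as np and images is a batch shaped (N, C, H, W); the 2-pixel jitter is an arbitrary choice):

def augment(images):
    # jitter each image by up to 2 pixels in x and y; flips, rotations or
    # noise would slot into the same place
    out = np.empty_like(images)
    for i in range(images.shape[0]):
        dx, dy = np.random.randint(-2, 3, size=2)
        out[i] = np.roll(np.roll(images[i], dy, axis=1), dx, axis=2)
    return out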

As a simple experiment I am trying to feed the MNIST data to the memorydata layer in python.
To do this I extract minibatches from the LMDB file in python, which I then feed to the network using the memorydata layer.
For testing I still use the LMDB, but for the training phase I load the data using the memorydata layer.

First I import some libraries and set up some paths:
caffe_root = '/home/diag/caffe-master/'  # this file is expected to be in {caffe_root}/examples
import sys
import os
import numpy as np
sys.path.insert(0, caffe_root + 'python')
os.chdir('/media/diag/Data/Python_scripts/')
import caffe
import lmdb
from pylab import *

miniBatchsize = 100

caffe.set_device(0)
caffe.set_mode_gpu()
solver = caffe.SGDSolver('mnist_python/lenet_auto_solver.prototxt')


Next I define the function to extract minibatches from the LMDB database:
def getData(it):
    stats = env.stat()
    nrEntries = stats['entries']
    # assumes nrEntries is a multiple of miniBatchsize (true for MNIST: 60000 / 100)
    begin = it * miniBatchsize % nrEntries
    end = begin + miniBatchsize
    # read one entry up front to get the image dimensions
    ID = '{:08}'.format(0)
    raw_datum = txn.get(ID)
    datum = caffe.proto.caffe_pb2.Datum()
    datum.ParseFromString(raw_datum)
    channels, height, width = datum.channels, datum.height, datum.width

    imageData = np.zeros((end - begin, channels, height, width), dtype='float32')
    labels = np.zeros((end - begin, 1, 1, 1), dtype='float32')
    count = 0
    for i in range(begin, end):
        # keys in the MNIST LMDB are zero-padded decimal indices
        ID = '{:08}'.format(i)
        raw_datum = txn.get(ID)
        datum = caffe.proto.caffe_pb2.Datum()
        datum.ParseFromString(raw_datum)
        flat_x = np.fromstring(datum.data, dtype=np.uint8)
        x = flat_x.reshape(datum.channels, datum.height, datum.width)
        y = datum.label
        imageData[count, :, :, :] = x
        labels[count, 0, 0, 0] = y
        count += 1
    return imageData, labels
This part is probably not the issue. I have checked that the images are extracted correctly: the images and labels end up in 4-dimensional arrays of size [100,1,28,28] and [100,1,1,1] respectively.
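For reference, that sanity check boils down to something like this (a sketch; it assumes env and txn are open as in the training loop below, and imshow comes from the pylab import above):

imageData, labels = getData(0)
# shapes and dtypes must match what set_input_arrays expects
print imageData.shape, imageData.dtype   # (100, 1, 28, 28) float32
print labels.shape, labels.dtype         # (100, 1, 1, 1) float32
# eyeball one example: pixels in 0-255, label in 0-9
print imageData[0].min(), imageData[0].max(), labels[0, 0, 0, 0]
imshow(imageData[0, 0], cmap='gray')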

Then for the actual training of the network I use the following code:
%%time
niter = 10000
test_interval = 500

train_loss = np.zeros(niter)
test_acc = np.zeros(int(np.ceil(niter / test_interval)))
output = np.zeros((niter, 8, 10))

# the main solver loop
env = lmdb.open('/media/diag/Data/Python_scripts/mnist_python/mnist_train_lmdb/', readonly=True)
with env.begin() as txn:
    for it in range(niter):
        (imageData, labels) = getData(it)
        solver.net.set_input_arrays(imageData, labels)
        solver.step(1)  # run one SGD step on this minibatch

        train_loss[it] = solver.net.blobs['loss'].data
        solver.test_nets[0].forward(start='conv1')
        output[it] = solver.test_nets[0].blobs['ip2'].data[:8]

        if it % test_interval == 0:
            print 'Iteration', it, 'testing...'
            correct = 0
            # manual test pass: 100 batches of 100 test images
            for test_it in range(100):
                solver.test_nets[0].forward()
                correct += sum(solver.test_nets[0].blobs['ip2'].data.argmax(1)
                               == solver.test_nets[0].blobs['label'].data)
            test_acc[it // test_interval] = correct / 1e4

Here I open the LMDB file, extract a minibatch from it, and feed it through the network using:
solver.net.set_input_arrays(imageData, labels)
solver.step(1)
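One caveat worth noting about set_input_arrays (to the best of my knowledge): pycaffe expects both arrays to be 4-D, C-contiguous float32, and the number of examples to be a multiple of the layer's batch_size, otherwise it raises an error. A defensive version of the call could look like this (a sketch; the hard-coded 100 mirrors batch_size in the prototxt below):

imageData = np.ascontiguousarray(imageData, dtype=np.float32)
labels = np.ascontiguousarray(labels, dtype=np.float32)
assert imageData.shape[0] % 100 == 0  # must be a multiple of batch_size
solver.net.set_input_arrays(imageData, labels)
solver.step(1)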


The network starts training without any problem, but the loss on the training data is not decreasing and the accuracy on the validation data is not increasing: the training loss varies slightly around 2.30, and the accuracy stays at 0.1. Since 2.30 ≈ ln(10) is exactly the loss of a uniform guess over 10 classes (and 0.1 the matching accuracy), the network appears to be learning nothing at all.

MNIST should converge rapidly, so it seems something is wrong with my approach.
I've been looking through all the topics about the memorydata layer in python, but none came up with an answer on how to get it to work.

What am I doing wrong?

Freerk Venhuizen

Sep 25, 2015, 4:23:17 AM
to Caffe Users
Something seems to be going wrong when loading the data using the set_input_arrays function:
solver.net.set_input_arrays(imageData, labels)

When I check the data blob after executing the above function, it only contains zeros:
print solver.net.blobs['data'].data[1]
Double-checking the input array, it does actually contain the first 100 MNIST examples as a 4-D numpy array (100x1x28x28) stored as float32.
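For anyone hitting the same symptom, the whole check boils down to this (a sketch, with env and txn open as in the training code above):

imageData, labels = getData(0)
print imageData.sum()                      # nonzero, so the input array is fine
solver.net.set_input_arrays(imageData, labels)
solver.step(1)
print solver.net.blobs['data'].data.sum()  # prints 0.0: the net never saw the data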

For completeness, the MemoryData layer definition in my prototxt:
layer {
  name: "data"
  type: "MemoryData"
  top: "data"
  top: "label"
  memory_data_param {
    batch_size: 100
    channels: 1
    height: 28
    width: 28
  }
}

And my solver file:

train_net: "mnist_python/lenet_auto_train.prototxt"
test_net: "mnist_python/lenet_auto_test.prototxt"
test_iter: 100
test_interval: 1000000
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
lr_policy: "inv"
gamma: 0.0001
power: 0.75
display: 100
max_iter: 10000
snapshot: 5000
snapshot_prefix: "lenet"
test_initialization: false

What is the proper way to load my data into the memorydata layer?

Freerk Venhuizen

Sep 25, 2015, 10:08:02 AM
to Caffe Users
Solved!

I can now reproduce the results I get with the LMDB data layer using the MemoryData layer.
It turns out it wasn't my python code that was wrong, but a bug in Caffe that has already been fixed (though not merged yet): https://github.com/BVLC/caffe/issues/2334

To work around it, merge this commit: https://github.com/TJKlein/caffe/commit/5f1bb97a587043dbe0892466b866abfe4c76804c#diff-cb3aaf3630305ad72ba64135cd00b269L289
Also initialize data_ and labels_ to NULL in the MemoryDataLayer constructor found in data_layers.hpp:

template <typename Dtype>
class MemoryDataLayer : public BaseDataLayer<Dtype> {
 public:
  explicit MemoryDataLayer(const LayerParameter& param)
      : BaseDataLayer<Dtype>(param), has_new_data_(false) {
    // make sure the raw data pointers never hold stale values
    data_ = NULL;
    labels_ = NULL;
  }
  virtual void DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

After rebuilding, all (unit) tests still pass, and python can now correctly use the MemoryDataLayer.

Evan Shelhamer

Sep 25, 2015, 1:12:04 PM
to Freerk Venhuizen, Caffe Users
Thanks for reporting this and confirming the solution! We'll try to loop back to that PR.

jamesr...@gmail.com

Feb 21, 2016, 2:27:54 AM
to Caffe Users
Thank you for your posts; I ran into the same problem.
I'm surprised that it's now 2016-02-21 and the bug still exists.

BTW, the file layout has changed since then: the code you mentioned in data_layers.hpp now lives in memory_data_layer.hpp.