I'm facing a strange problem: I am unable to train a net using a MemoryData layer.
# read MNIST data; train_data has shape (50000, 3, 28, 28), val_data has shape (10000, 3, 28, 28)
index = 427
solver.net.set_input_arrays(train_data[index].reshape(1,3,28,28), train_label[index].reshape(1,1,1,1))
solver.test_nets[0].set_input_arrays(val_data[index].reshape(1,3,28,28), val_label[index].reshape(1,1,1,1))
# change to solver.step(1) is the same
solver.solve()
# check that when index changed
print solver.net.blobs['loss'].data
imshow(solver.test_nets[0].blobs['data'].data.reshape(3,28,28).transpose(1, 2, 0) * 255)
result = solver.test_nets[0].forward(start='data')
print result
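One thing worth ruling out before anything else: pycaffe's set_input_arrays is picky about the arrays it receives, and feeding it raw 0-255 pixel values can make the softmax underflow (a loss stuck at ~87.3365 is -log(FLT_MIN), which matches the symptom above). Below is a minimal sketch of the preparation step, using randomly generated stand-ins for train_data and train_label since the real loading code is not shown; prepare_for_memory_data is a hypothetical helper name, not a Caffe API.

```python
import numpy as np

# Hypothetical stand-ins for the real MNIST arrays loaded above.
train_data = np.random.randint(0, 256, size=(50, 3, 28, 28)).astype(np.uint8)
train_label = np.random.randint(0, 10, size=(50,)).astype(np.int64)

def prepare_for_memory_data(data, label):
    """Return (data, label) in a layout set_input_arrays accepts:
    4-D, float32, C-contiguous; pixels scaled to [0, 1] so the net
    does not see raw 0-255 inputs (a common cause of an exploding loss)."""
    data = np.ascontiguousarray(data.reshape(1, 3, 28, 28),
                                dtype=np.float32) / 255.0
    label = np.ascontiguousarray(label.reshape(1, 1, 1, 1),
                                 dtype=np.float32)
    return data, label

index = 427 % len(train_data)
d, l = prepare_for_memory_data(train_data[index], train_label[index])
# solver.net.set_input_arrays(d, l)   # as in the snippet above
```

If the LMDB path applies a scale (e.g. 0.00390625 in the standard LeNet transform_param) that the MemoryData path does not, that alone can explain NaN predictions.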
When I change index, I checked with the following code and confirmed that different data and labels are indeed read each time.
imshow(solver.test_nets[0].blobs['data'].data.reshape(3,28,28).transpose(1, 2, 0) * 255)
print val_label[tmp_index].reshape(1,1,1,1)
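Since imshow can be misleading (e.g. when values saturate after the * 255 scaling), a numeric comparison is a more reliable check that different data really reaches the net. A minimal sketch with a random stand-in for val_data; with the real net you would compare solver.test_nets[0].blobs['data'].data before and after each set_input_arrays call instead.

```python
import numpy as np

# Hypothetical stand-in for the val_data array loaded above.
val_data = np.random.rand(1000, 3, 28, 28).astype(np.float32)

# Reshape two different indices exactly as in the snippet above
# and compare them numerically rather than by eye.
a = val_data[427].reshape(1, 3, 28, 28)
b = val_data[428].reshape(1, 3, 28, 28)
print("identical inputs:", np.array_equal(a, b))
```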
BUT, no matter which index I use, 'result' is always exactly the same:
result: {'loss': array(87.3365478515625, dtype=float32), 'accuracy': array(1.0, dtype=float32), 'predic': array([[ nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]], dtype=float32)}
I searched a lot but have no idea; could you please help me? Thanks in advance.
P.S.
1. net info:
[('data', (1, 3, 28, 28)),
('label', (1,)),
('conv1', (1, 20, 24, 24)),
('pool1', (1, 20, 12, 12)),
('conv2', (1, 50, 8, 8)),
('pool2', (1, 50, 4, 4)),
('ip1', (1, 500)),
('ip2', (1, 10)),
('ip2_ip2_0_split_0', (1, 10)),
('ip2_ip2_0_split_1', (1, 10)),
('predic', (1, 10)),
('loss', ())]
2. I have passed the tests, and the LMDB approach works well for both the example and my custom training and prediction.