Hello,
I'm trying to train and test a CNN with my own data. For that, I already ran a bash file on the command line and got an accuracy of 0.867.
Now I want to analyze the results in Python, so I wrote a script to run the net there. I used some of the code from
this example, but I got an accuracy of 0.27 (test_acc). After adding a new variable, "test_acc1", which reads its values from the accuracy blob, I get 0.867 again. I just don't understand why the "test_acc" value is different, since the calculation method is the same as that of the accuracy layer. I would also like to know which value is the correct one.
import numpy as np
# assumes `solver` has already been created, e.g.:
# solver = caffe.SGDSolver('solver.prototxt')

niter = 10000
test_interval = 500
train_loss = np.zeros(niter)
test_acc = np.zeros(int(np.ceil(niter / test_interval)))
test_acc1 = np.zeros(int(np.ceil(niter / test_interval)))
output = np.zeros((niter, 8, 2))

# the main solver loop
for it in range(niter):
    solver.step(1)  # SGD by Caffe

    # store the train loss
    train_loss[it] = solver.net.blobs['loss'].data

    # store the output on the first test batch
    # (start the forward pass at conv1 to avoid loading new data)
    solver.test_nets[0].forward(start='conv1')
    output[it] = solver.test_nets[0].blobs['score'].data[:8]

    # inspect blob and weight shapes (inspection only; the results are discarded
    # unless run interactively)
    # each output is (batch size, feature dim, spatial dim)
    [(k, v.data.shape) for k, v in solver.net.blobs.items()]
    # just print the weight sizes (we'll omit the biases)
    [(k, v[0].data.shape) for k, v in solver.net.params.items()]

    # run a full test every so often
    # (Caffe can also do this for us and write to a log, but we show here
    # how to do it directly in Python, where more complicated things are easier.)
    if it % test_interval == 0:
        print 'Iteration', it, 'testing...'
        correct = 0
        for test_it in range(100):
            solver.test_nets[0].forward()
            correct += sum(solver.test_nets[0].blobs['score'].data.argmax(1)
                           == solver.test_nets[0].blobs['label'].data)
        test_acc[it // test_interval] = correct / 1e4
        test_acc1[it // test_interval] = solver.test_nets[0].blobs['accuracy'].data
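For reference, the manual accuracy computation inside the test loop follows this argmax-vs-label pattern. Here is a minimal NumPy sketch with made-up scores and labels (no Caffe needed; all names and data are my own illustration):

```python
import numpy as np

def batch_accuracy_count(scores, labels):
    # scores: (batch_size, num_classes), labels: (batch_size,)
    # the predicted class is the argmax over the class dimension
    return np.sum(scores.argmax(axis=1) == labels)

# hypothetical batch: 4 samples, 3 classes
scores = np.array([[0.1, 0.8, 0.1],
                   [0.9, 0.05, 0.05],
                   [0.2, 0.3, 0.5],
                   [0.6, 0.3, 0.1]])
labels = np.array([1, 0, 2, 2])

correct = batch_accuracy_count(scores, labels)  # 3 of the 4 predictions match
print(correct / float(len(labels)))
```

The fraction only comes out right if the divisor equals the total number of labels actually compared.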
Also, I have 90 images for training and 30 for testing. The batch_size of the data layer for each of them is 90 and 30, respectively.
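To make the setup concrete, here is the trivial arithmetic for how many predictions the manual test loop accumulates per evaluation with these batch sizes (variable names are my own):

```python
# samples passed through the test net per manual evaluation
test_iters = 100       # range(100) in the loop above
test_batch_size = 30   # batch_size of the test data layer
samples_seen = test_iters * test_batch_size
print(samples_seen)
```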
Thank you in advance!