That's the way I have been doing it; there is probably a way to get Caffe to write its output to a file natively, but I haven't looked for one.
I0307 17:59:21.575685 4338 hdf5_data_layer.cpp:66] Loading list of HDF5 filenames from: examples/FaceData.h5
I0307 18:01:10.133177 4338 hdf5_data_layer.cpp:80] Number of HDF5 files: 202358541
HDF5-DIAG: Error detected in HDF5 (1.8.11) thread 140690558573120:
#000: ../../../src/H5F.c line 1586 in H5Fopen(): unable to open file
major: File accessibilty
minor: Unable to open file
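A likely cause of that error: the HDF5 data layer's `source` parameter expects a plain-text file listing one `.h5` path per line, not the `.h5` file itself. Passing the binary file directly makes Caffe parse its bytes as a filename list, which would explain the absurd "Number of HDF5 files: 202358541" count. A minimal sketch of building the list file (the paths here are hypothetical):

```python
# The prototxt `source:` for an HDF5Data layer should point at this
# text file, which in turn lists the actual .h5 data file(s).
h5_files = ["examples/FaceData.h5"]   # the real HDF5 data file(s)
list_path = "face_data_list.txt"      # goes into the prototxt `source`

with open(list_path, "w") as f:
    for path in h5_files:
        f.write(path + "\n")
```

After this, Caffe should report a sane file count (here, 1) when loading the list.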
import caffe
from caffe import layers as L, params as P

def mynet(db, batch_size, n_classes=11):
    lr_mult1 = 1  # learning-rate multiplier for weights
    lr_mult2 = 2  # learning-rate multiplier for biases
    n = caffe.NetSpec()
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=db,
                             transform_param=dict(scale=1. / 255), ntop=2)
    n.conv1 = L.Convolution(n.data, param=[dict(lr_mult=lr_mult1), dict(lr_mult=lr_mult2)],
                            kernel_size=5, stride=1, num_output=50,
                            weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, param=[dict(lr_mult=lr_mult1), dict(lr_mult=lr_mult2)],
                            kernel_size=5, stride=1, num_output=20,
                            weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.ip1 = L.InnerProduct(n.pool2, param=[dict(lr_mult=lr_mult1), dict(lr_mult=lr_mult2)],
                           num_output=500, weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.ip1, in_place=True)
    n.ip2 = L.InnerProduct(n.relu1, param=[dict(lr_mult=lr_mult1), dict(lr_mult=lr_mult2)],
                           num_output=n_classes, weight_filler=dict(type='xavier'))
    # n.accuracy = L.Accuracy(n.ip2, n.label)
    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
    return n.to_proto()
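For reference, the first layer of the prototxt that `str(mynet(...))` emits should look roughly like this (a sketch; field order and the printed precision of the scale value may differ, and the LMDB path is a placeholder):

```protobuf
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  transform_param {
    scale: 0.00392156863   # 1/255
  }
  data_param {
    source: "path/to/train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
```

Writing that string to a `.prototxt` file is the usual way to get the net definition onto disk.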
n.accuracy = L.Accuracy(n.ip2, n.label)
works as expected.

OK, I had introduced a problem into the LMDB. Still curious whether I can use L.Accuracy(top, bottom), if anyone can weigh in; in the meantime I will try it and see what transpires.
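For what it's worth, `L.Accuracy(n.ip2, n.label)` is a valid NetSpec call and emits a standard Accuracy layer. A sketch of the resulting prototxt (layer/blob names assumed; the `include { phase: TEST }` clause is an optional extra, commonly passed as `include=dict(phase=caffe.TEST)`, so accuracy is only computed during testing):

```protobuf
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
```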
On Thursday, March 3, 2016 at 4:59:45 PM UTC+2, Jeremy Rutman wrote:
I have spoken too soon, it seems; after correcting my net generation I get the same error.
I generate a 'lenet'-style architecture as below, which seems to work on other image dbs but not this one. Incidentally, will L.Accuracy be a valid layer?
Hi JD,
Is it true for both training and testing?