Hi,
I am a new Caffe user and I need some help. I am trying to run existing networks on a custom dataset, and I have already done this successfully with CaffeNet.
Now I want to train the LeNet network. I am using the PASCAL dataset. Let's say I want to classify images into 2 classes: depicting an airplane or not.
I am following this tutorial: http://sites.duke.edu/rachelmemo/2015/04/03/train-and-test-lenet-on-your-own-dataset/, but I have some questions about it.
1) In step 2, the tutorial says to customize inner_product_param: num_output: #labels in lenet.prototxt and lenet_train_test.prototxt. I guess I have to change that parameter in the layer "ip2", which, as I understand it, is the second fully connected layer. Its default value is num_output: 10, and this parameter sets how many outputs that layer has. So if I want to classify whether or not the image is a plane (classes -1 and 1), should I set num_output: 2?
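To make sure I am looking at the right place, this is the layer I think I need to change in lenet_train_test.prototxt, with num_output already set to 2 as I understand it (I have left out the param/weight-filler blocks from the shipped file for brevity):

```prototxt
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  inner_product_param {
    # was 10 for the ten MNIST digits; 2 for airplane / not-airplane?
    num_output: 2
  }
}
```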
2) In lenet_solver.prototxt there is a parameter test_iter, and I do not understand exactly what it does. The code comment states: "test_iter specifies how many forward passes the test should carry out." On the other hand, the guide says: "test_iter: #test batches (the total test images evaluated in TRAIN phase = test_iter * TEST batch size)". Somewhere else I saw the comment: "Test batch size can be set to any value (well, any positive value) as long as the batch size times the test iterations equals the number of test data points."
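If I put those quotes together, the relation would be test_iter * TEST batch_size = number of test images. For example, with 1000 test images (my own made-up number, just to check my understanding), I would write something like:

```prototxt
# lenet_solver.prototxt (fragment)
test_iter: 40        # 40 forward passes per test phase

# lenet_train_test.prototxt, TEST-phase data layer (fragment)
batch_size: 25       # 40 * 25 = 1000 = my number of test images
```

Is that the right way to read it?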
3) Based on the above, what should I set the train and test batch sizes to? With CaffeNet I had used 64 for train and 12 for validation, which took around 1.5 GB of GPU memory. Why is the test batch here in LeNet larger than the train batch? It probably won't fit in my GPU (2 GB) on the first try, so I will reduce the batch size. Can I use a larger batch size for train than for test? Also, how does test_iter from 2) affect the GPU memory requirement?
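To make the question concrete, this is the kind of setup I have in mind, with a larger train batch than test batch (the batch sizes and LMDB paths are just my example, not the shipped ones):

```prototxt
# TRAIN-phase data layer
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  data_param {
    source: "examples/my_airplane/train_lmdb"  # my dataset path
    batch_size: 64                             # larger train batch
    backend: LMDB
  }
}
# TEST-phase data layer
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  data_param {
    source: "examples/my_airplane/test_lmdb"
    batch_size: 25                             # smaller test batch
    backend: LMDB
  }
}
```

Would that be a valid configuration?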
4) Finally, I saw in this tutorial that the lenet.prototxt file also requires modification. Is that right? Since the architecture used by lenet_solver.prototxt is lenet_train_test.prototxt, do we need to modify lenet.prototxt at all? As far as I can tell, lenet.prototxt defines the same architecture, just without the testing phase. The guide says to change the line input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }, where the values correspond to (batch size, #channels, image height, image width), and the inner product layer as in 1). It seems to me that lenet.prototxt doesn't take part anywhere. Should I leave it as is?
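For completeness, this is the input layer in lenet.prototxt that I understand the guide is referring to, with the dimensions read as (batch size, #channels, height, width):

```prototxt
layer {
  name: "data"
  type: "Input"
  top: "data"
  # 64 images per batch, 1 channel (grayscale), 28 x 28 pixels
  input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }
}
```

So my question is whether these dims (and the num_output of ip2) need to match whatever I use in lenet_train_test.prototxt, even if lenet.prototxt is only used for deployment.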
Thanks in advance, and sorry for the long post.