input: 'data'
input_dim: 1
input_dim: 3
input_dim: 500
input_dim: 500
whereas yours is
input_dim: N
input_dim: 3
input_dim: 500
input_dim: 500
I suggest following their pattern first and seeing whether that works.
regards
ihsan
Thanks, Ihsan. I think you are referring to deploy.prototxt. I am trying to train the network, so I am working with train_val.prototxt instead.
My Data layer looks like the following, and sets the batch size to 1.
layer {
  name: "data"
  type: "Data"
  top: "data"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_value: 104.00699
    mean_value: 116.66877
    mean_value: 122.67892
  }
  data_param {
    source: "path/to/my/lmdb"
    batch_size: 1
    backend: LMDB
  }
}
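As an aside, the three mean_value entries above are subtracted per channel before each image reaches the network. A minimal numpy sketch of that transform (the image shape here is illustrative; Caffe stores images channel-first, in OpenCV's BGR channel order):

```python
import numpy as np

# Per-channel means from transform_param above (BGR order).
MEAN_BGR = np.array([104.00699, 116.66877, 122.67892], dtype=np.float32)

def subtract_mean(image_chw):
    """Subtract per-channel means, as the data transformer does for each
    mean_value entry; expects a float array of shape (3, H, W)."""
    return image_chw - MEAN_BGR[:, None, None]

# A flat gray image: each output channel shifts by its own mean.
img = np.full((3, 4, 4), 128.0, dtype=np.float32)
out = subtract_mean(img)
```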
In any case, I have modified my build to reshape layers whose blobs do not match the expected size, and this seems to work. The value of the loss function jumps around a lot; I am training on an extremely small data set just to check that the process works. So far the network predicts 0 for every pixel of the test image; I suspect this is due to training for only 10k iterations on an inadequate data set.
I0607 19:12:19.261982 15288 solver.cpp:214] Iteration 10080, loss = 9034.48
I0607 19:12:19.262035 15288 solver.cpp:229] Train net output #0: loss = 16185.3 (* 1 = 16185.3 loss)
I0607 19:12:19.262049 15288 solver.cpp:489] Iteration 10080, lr = 1e-10
I0607 19:12:31.258111 15288 solver.cpp:214] Iteration 10100, loss = 7571.7
I0607 19:12:31.258159 15288 solver.cpp:229] Train net output #0: loss = 19408.8 (* 1 = 19408.8 loss)
I0607 19:12:31.258173 15288 solver.cpp:489] Iteration 10100, lr = 1e-10
I0607 19:12:43.530395 15288 solver.cpp:214] Iteration 10120, loss = 8542.87
I0607 19:12:43.530436 15288 solver.cpp:229] Train net output #0: loss = 13480.4 (* 1 = 13480.4 loss)
I0607 19:12:43.530450 15288 solver.cpp:489] Iteration 10120, lr = 1e-10
I0607 19:12:55.746253 15288 solver.cpp:214] Iteration 10140, loss = 5557.32
I0607 19:12:55.746295 15288 solver.cpp:229] Train net output #0: loss = 0.00578489 (* 1 = 0.00578489 loss)
I0607 19:12:55.746309 15288 solver.cpp:489] Iteration 10140, lr = 1e-10
I0607 19:13:07.895169 15288 solver.cpp:214] Iteration 10160, loss = 7662.01
I0607 19:13:07.895215 15288 solver.cpp:229] Train net output #0: loss = 14601.2 (* 1 = 14601.2 loss)
I0607 19:13:07.895231 15288 solver.cpp:489] Iteration 10160, lr = 1e-10
I0607 19:13:20.042099 15288 solver.cpp:214] Iteration 10180, loss = 6612.53
I0607 19:13:20.042146 15288 solver.cpp:229] Train net output #0: loss = 9599.04 (* 1 = 9599.04 loss)
I0607 19:13:20.042161 15288 solver.cpp:489] Iteration 10180, lr = 1e-10
I0607 19:13:32.172777 15288 solver.cpp:214] Iteration 10200, loss = 9919.12
I0607 19:13:32.172821 15288 solver.cpp:229] Train net output #0: loss = 0.565339 (* 1 = 0.565339 loss)
I0607 19:13:32.172834 15288 solver.cpp:489] Iteration 10200, lr = 1e-10
I0607 19:13:44.346729 15288 solver.cpp:214] Iteration 10220, loss = 10103.3
I0607 19:13:44.346784 15288 solver.cpp:229] Train net output #0: loss = 14821.5 (* 1 = 14821.5 loss)
I0607 19:13:44.346798 15288 solver.cpp:489] Iteration 10220, lr = 1e-10
I0607 19:13:56.368582 15288 solver.cpp:214] Iteration 10240, loss = 8975.04
I0607 19:13:56.368634 15288 solver.cpp:229] Train net output #0: loss = 0.00257222 (* 1 = 0.00257222 loss)
I0607 19:13:56.368646 15288 solver.cpp:489] Iteration 10240, lr = 1e-10
Is reshaping the blob the right approach?
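For comparison, my understanding is that the FCN reference models handle size mismatches between blobs with a Crop layer rather than a custom reshape, cropping one bottom blob to the spatial size of the other. A sketch of such a layer (the blob names and offset value here are illustrative, not taken from any particular model):

```protobuf
layer {
  name: "crop"
  type: "Crop"
  bottom: "upscore"  # blob to be cropped
  bottom: "data"     # blob whose spatial size is the target
  top: "score"
  crop_param {
    axis: 2          # crop the spatial axes (H, W), leave N and C alone
    offset: 19
  }
}
```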
The script I am using is broadly the same as the example Evan posted in a PR and in a few emails; I don't have access to it at the moment.
I understood the network to be predicting a dense matrix of class labels represented by integers; the labels can then be mapped to colors for visualization. I did test using label data with three channels, and Caffe warned that the number of predictions was one third the number of labels, which seemed to corroborate using a single channel for the labels. Maybe someone familiar with this work can weigh in.
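To illustrate what I mean, here is a numpy sketch (the class count, shapes, and palette are made up): the network emits one score map per class, an argmax over the channel axis collapses those into a single integer label per pixel, matching a one-channel label blob, and a palette lookup turns the label map into colors.

```python
import numpy as np

num_classes = 3
scores = np.random.rand(1, num_classes, 4, 5)    # (N, C, H, W) net output
labels = np.zeros((1, 1, 4, 5), dtype=np.int64)  # (N, 1, H, W) ground truth

# One prediction per pixel: argmax over the channel axis collapses the
# C score maps into a single label map, matching the label shape.
pred = scores.argmax(axis=1)                     # (N, H, W)
assert pred.shape == labels[:, 0].shape

# Map integer labels to RGB colors for visualization.
palette = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0]], dtype=np.uint8)
color_map = palette[pred[0]]                     # (H, W, 3)
```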
Gavin
--
You received this message because you are subscribed to the Google Groups "Caffe Users" group.
To view this discussion on the web visit https://groups.google.com/d/msgid/caffe-users/58637315-2b32-42e4-b04f-b7def9e0043e%40googlegroups.com.