Hi,
I have seen that there have been many posts about designing networks that transform one image into another. A few are here:
But none of them clearly explain the details of how they went about doing it. I have been able to figure out that there need to be two LMDB files -- one with the input images and one with the output images -- and I am able to generate those with a Python script.
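For reference, this is roughly the kind of script I am using to write each LMDB. The paths and the images list below are placeholders for my own data:

import lmdb
import numpy as np
import caffe

# Stand-in for the real images; mine are loaded from disk as H x W x C uint8 arrays.
images = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(100)]

env = lmdb.open('input_train_lmdb', map_size=int(1e12))
with env.begin(write=True) as txn:
    for i, img in enumerate(images):
        # Datum expects C x H x W, so transpose before converting.
        datum = caffe.io.array_to_datum(img.transpose(2, 0, 1))
        txn.put('{:08d}'.format(i).encode('ascii'), datum.SerializeToString())

The output-image LMDB is written the same way with the same keys, so record i in one database lines up with record i in the other.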
What I am not completely sure about is how to wire these two databases into a train_val.prototxt for a very simple convolutional network, just to get training started. In particular, how do I define the labels to be the output images, given that ordinary Caffe labels can only be integers?
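To make the question concrete, here is my best guess at what the TRAIN part of such a train_val.prototxt might look like. The LMDB names are the placeholders from above, and I am not sure that feeding the output images through a second Data layer is the right approach:

name: "ImageToImage"
layer {
  name: "data"
  type: "Data"
  top: "data"
  include { phase: TRAIN }
  transform_param { scale: 0.00390625 }  # bring uint8 pixels into [0, 1]
  data_param {
    source: "input_train_lmdb"   # placeholder path
    batch_size: 16
    backend: LMDB
  }
}
layer {
  name: "target"
  type: "Data"
  top: "target"                  # used as the "label", but it is a full image
  include { phase: TRAIN }
  transform_param { scale: 0.00390625 }
  data_param {
    source: "target_train_lmdb"  # placeholder path
    batch_size: 16
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 3        # must match the channel count of the target images
    kernel_size: 3
    pad: 1               # keep spatial size so conv1 matches target exactly
    stride: 1
    weight_filler { type: "xavier" }
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"  # per-pixel L2 loss between prediction and target
  bottom: "conv1"
  bottom: "target"
  top: "loss"
}

My understanding is that the two Data layers read their records sequentially, so the input and target batches should stay aligned as long as both LMDBs were written in the same order -- is that correct?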
Is this documented somewhere? Am I missing a specific part of the docs or the code?
Are there any train_val.prototxt files available anywhere that implement a similar setup? I have seen many deploy.prototxt files, which are train_val files with the data and loss layers stripped out, but I have not been able to find one that shows the complete training setup.
Any help would be greatly appreciated.
Thanks,