I have a working network for pixel-wise image segmentation, i.e. it takes two images as input: one as data and one as label (ground truth). For the loss I use a `SoftmaxWithLoss` layer, as shown below:
layer {
  name: "conv"
  type: "Convolution"
  bottom: "data"
  top: "conv"
  convolution_param {
    num_output: 256  # <-- "256 classes"
    ...
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "conv"
  bottom: "label"
  top: "loss"
}
My input image values range over [0, 255], which is why I have 256 classes.
Now I want to turn this segmentation / classification task into a regression task. I assumed the only things I had to change were the loss layer and the `num_output` of the convolution layer, like this:
layer {
  name: "conv"
  type: "Convolution"
  bottom: "data"
  top: "conv"
  convolution_param {
    num_output: 1  # <-- "regression"
    ...
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "conv"
  bottom: "label"
  top: "loss"
}
My regression network does not produce satisfying results at all.
I think the difference between the two tasks is this: in the classification task I get a full distribution over all depth values. With `num_output: 256` the network outputs a `256 x 128 x 128` blob, i.e. one probability per `depth class` for each pixel. In the regression task, `num_output: 1` yields only `one depth value` per pixel rather than `256`.
1. Classification:
Does anyone know how to adjust the `softmax layer` so that it produces results of shape `256 x a_number_greater_1 x width x height` rather than `256 x width x height`?
or
2. Regression:
Is there any other approach that yields the result as a distribution of probabilities, like the `softmax layer` does?
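For context on what I mean by "a distribution of probabilities": in the classification setup, a plain `Softmax` layer on top of `conv` at test time gives me one probability per depth class per pixel (sketch; the layer name `prob` is my own):

```
layer {
  name: "prob"
  type: "Softmax"
  bottom: "conv"
  top: "prob"  # 256 x width x height: a probability for each depth class at each pixel
}
```

This per-pixel distribution is what I would like to keep (or approximate) in the regression setup.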