Hi, I'd like to train a network with a softmax/cross-entropy loss function. My labels are full probability distributions, not just one-hot vectors. Does Caffe support computing the full cross-entropy loss? Neither the multinomial logistic loss layer nor the softmax loss layer accepts a probability distribution as a label; both expect a one-hot target specified as the index of the true class. I can implement it myself, but first I'd like to confirm that Caffe doesn't already include it.
For clarity, I don't want to use the sigmoid cross-entropy loss layer, because sigmoid outputs don't form a probability distribution (they aren't constrained to sum to 1).
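To be concrete, here's a minimal NumPy sketch of the loss I have in mind (function names are my own, not Caffe's): the full cross entropy H(p, q) = -Σᵢ pᵢ log qᵢ, where p is a soft target distribution and q is the softmax of the network's logits.

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, q, eps=1e-12):
    # Full cross entropy H(p, q) = -sum_i p_i * log(q_i).
    # p: target distribution (may be soft), q: predicted distribution.
    # eps guards against log(0).
    return -np.sum(p * np.log(q + eps))

logits = np.array([2.0, 1.0, 0.1])
p = np.array([0.7, 0.2, 0.1])  # a soft label: a full distribution
loss = cross_entropy(p, softmax(logits))
```

When p is one-hot, this reduces to -log qₜ for the true class t, which is exactly what the existing softmax loss layer computes; the general form above is what I'm asking about.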