Eltwise layer - usage


Aleksander Kużel

Feb 26, 2017, 9:36:02 AM
to Caffe Users
Hi all, 

could you explain something to me?

Let's say I have two InnerProduct layers, each returning a vector. I have 1000 classes, so as I understand it each is a vector of probabilities: class 0 with probability 0.001, class 1 with probability 0.9, and so on.

Next, I want to add those two vectors, and for this I use an Eltwise layer.

And here is my question. Suppose vector 1 says it's class 1 with probability 0.9, and vector 2 says it's class 1 with probability 0.8.

After the Eltwise sum, the value will be 1.7.

So, is there some way to divide each element by 2 so the probability won't be higher than 100%? Or does Caffe take care of this problem on its own?

Thank you, and sorry if my question is stupid because I misunderstood something.


Aleksander Kużel

Feb 28, 2017, 4:32:24 PM
to Caffe Users
up

Przemek D

Mar 3, 2017, 2:11:32 AM
to Caffe Users
First of all, InnerProduct does not give you probabilities but raw scores; it is the Softmax that normalizes them into probabilities.
If you go with the standard classification setup, i.e. use SoftmaxWithLoss, the probabilities are not really exposed to you during training (so you can't manipulate them). At test time this becomes just a Softmax layer, and then you can do whatever you want - including summing two vectors with Eltwise and scaling the result with a Power layer (it does a general per-element exponentiation: (shift + scale*x)^power).
If you need to add those vectors during training, you can do that with Eltwise too, directly on the InnerProduct outputs, before the Softmax. Softmax will then do the probability normalization for you, so you don't need any tricks like scaling by 0.5 - I do just that and it works well.
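To make the test-time pipeline concrete, here is a small NumPy sketch of what those layers compute (the class scores and the 3-class size are made up for illustration; Caffe itself would do this on blobs, not NumPy arrays):

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating, for numerical stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def power_layer(x, shift=0.0, scale=1.0, power=1.0):
    # Per-element (shift + scale * x) ** power, like Caffe's Power layer.
    return (shift + scale * x) ** power

# Hypothetical raw scores from two InnerProduct layers (3 classes for brevity).
ip1 = np.array([2.0, 1.0, 0.1])
ip2 = np.array([1.5, 1.2, 0.3])

# Test time: Softmax each vector, sum with Eltwise, halve with Power.
p1, p2 = softmax(ip1), softmax(ip2)
avg = power_layer(p1 + p2, scale=0.5)  # average of the two distributions

print(avg.sum())  # the result sums to 1, a valid probability distribution
```

Since each softmax output sums to 1, their sum sums to 2, and scaling by 0.5 brings it back to a proper distribution - which is exactly the "divide by 2" from the question, done with a Power layer.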
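And a sketch of the training-time variant, showing why no 0.5 scaling is needed when the Eltwise sum happens before the Softmax (same made-up scores as above):

```python
import numpy as np

def softmax(z):
    # Subtract the max before exponentiating, for numerical stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical raw (unnormalized) scores from two InnerProduct layers.
ip1 = np.array([2.0, 1.0, 0.1])
ip2 = np.array([1.5, 1.2, 0.3])

# Training time: Eltwise SUM on the raw scores, then Softmax.
p = softmax(ip1 + ip2)

print(p.sum())  # sums to 1 - Softmax normalizes, no extra scaling needed
```

Softmax always produces a valid distribution whatever the scale of its input, so summing (rather than averaging) the raw scores only sharpens the resulting distribution; it can never push a probability above 1.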