First of all, InnerProduct does not give you probabilities but raw scores (logits); the Softmax layer then normalizes these into probabilities.
If you go with the standard classification setup, i.e. use SoftmaxWithLoss, the probabilities are not really exposed to you during training, so you can't manipulate them. For testing, this becomes just a Softmax layer, and then you can do whatever you want with its output, including summing two vectors with an Eltwise layer and scaling the result with a Power layer (it does a general per-element operation: (a + b*x)^c, where a is the shift, b the scale, and c the power).
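For illustration, a test-time prototxt sketch that averages two probability vectors this way might look like the following (layer and blob names such as fc_a / fc_b are hypothetical, not from any particular network):

```protobuf
# Turn the two InnerProduct outputs into probabilities (test-time only)
layer { name: "prob_a" type: "Softmax" bottom: "fc_a" top: "prob_a" }
layer { name: "prob_b" type: "Softmax" bottom: "fc_b" top: "prob_b" }

# Sum the two probability vectors element-wise
layer {
  name: "prob_sum"
  type: "Eltwise"
  bottom: "prob_a"
  bottom: "prob_b"
  top: "prob_sum"
  eltwise_param { operation: SUM }
}

# Scale by 0.5 so the result is a probability distribution again:
# Power computes (shift + scale * x)^power = (0 + 0.5*x)^1
layer {
  name: "prob_avg"
  type: "Power"
  bottom: "prob_sum"
  top: "prob_avg"
  power_param { power: 1.0 scale: 0.5 shift: 0.0 }
}
```

Alternatively, Eltwise itself supports per-bottom coefficients (coeff: 0.5 under SUM), which would fold the averaging into one layer.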
If you need to add those vectors during training, you can do that with Eltwise too, directly on the InnerProduct outputs, before the Softmax. Softmax will then do the probability normalization for you, so you don't need any tricks with scaling by 0.5 etc. I do just that and it works well.
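A minimal training-time sketch of that idea, again with hypothetical blob names (fc_a, fc_b, label):

```protobuf
# Sum the raw InnerProduct outputs before the loss;
# SoftmaxWithLoss normalizes internally, so no 0.5 scaling is needed
layer {
  name: "fc_sum"
  type: "Eltwise"
  bottom: "fc_a"
  bottom: "fc_b"
  top: "fc_sum"
  eltwise_param { operation: SUM }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc_sum"
  bottom: "label"
  top: "loss"
}
```

Note that summing logits before the softmax is not the same as averaging the two probability vectors; it corresponds to a product of the two (unnormalized) distributions, which is often what you want anyway.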