Tied weights - Is it really necessary?

Prasanna S

Aug 21, 2015, 1:34:50 PM
to Caffe Users
What is the best way to figure out whether tying the weights of the autoencoder is necessary or not?

Can't the network learn the transposed weights on its own if that is the optimal representation?

Tarik Arici

Aug 26, 2015, 6:03:55 PM
to Caffe Users
It can, but NNs optimize a loss function by stochastic gradient descent, which can get stuck at a local minimum. If you have reason to believe that the features you project onto need to be the same as the kernels you use for reconstruction, then you should impose that by tying the weights.
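
As a concrete illustration (a minimal NumPy sketch, not Caffe prototxt; the names encode/decode and all sizes here are just for this example), tying the weights means the decoder reuses the transpose of the encoder matrix, so both directions share one set of parameters:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden = 784, 128

    W = rng.normal(0.0, 0.01, size=(n_hidden, n_in))  # the single shared weight matrix
    b_enc = np.zeros(n_hidden)
    b_dec = np.zeros(n_in)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def encode(x):
        return sigmoid(W @ x + b_enc)

    def decode(h):
        # tied weights: reuse W.T instead of learning a separate decoder matrix
        return sigmoid(W.T @ h + b_dec)

    x = rng.random(n_in)
    x_hat = decode(encode(x))  # reconstruction through the tied autoencoder

An untied autoencoder would instead give decode its own matrix of shape (n_in, n_hidden), roughly doubling the parameter count and leaving SGD free to learn a decoder that is not the transpose of the encoder.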

After all, CNNs apply the same approach: they tie the weights of all units in a channel. Even if we didn't have any computational constraints, convolutional layers would still be very useful for learning from images, for many reasons...
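
To put a rough number on that (the layer sizes below are made up for illustration), sharing one small kernel across all spatial positions is what keeps the convolutional parameter count tiny compared to a fully connected layer over the same input:

    # hypothetical sizes, for illustration only
    H, W_img, C_in, C_out, k = 32, 32, 3, 16, 3

    # convolution: one k x k kernel per (input channel, output channel) pair,
    # shared across every spatial position
    conv_params = C_out * C_in * k * k                     # 432

    # fully connected layer mapping the same input to the same output shape:
    # every output unit gets its own weights, nothing is shared
    fc_params = (H * W_img * C_in) * (H * W_img * C_out)   # ~50 million

    print(conv_params, fc_params)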