I have started by looking at the example in 'examples/mnist/mnist_autoencoder.prototxt', but I am not sure how to "tie" the weights. Specifically, I am not sure how to make the gradient of the weights between the input and middle layers correspond to the gradient of the weights between the middle and output layers.
Youssef Kashef
Sep 10, 2015, 1:13:57 PM9/10/15
The idea is to use the weight-sharing feature supported by Caffe (parameters are shared by giving them the same name in a `param { name: ... }` spec) and to define a new layer class that inherits from InnerProduct and transposes the shared weights before calling the parent class's forward() and backward().
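The math behind this can be sketched in plain numpy (all names here are hypothetical, for illustration only; Caffe's actual sharing is declared in the prototxt and the custom layer would be written in C++). The key point is that the encoder uses W, the decoder uses W.T, and the weight gradients from both layers accumulate into the single shared blob:

```python
import numpy as np

# Tied-weight autoencoder sketch (hypothetical, not Caffe code):
# one weight matrix W is used by the encoder and its transpose W.T
# by the decoder, so both layers' gradients sum into the same buffer.

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 3
W = rng.standard_normal((n_hidden, n_in)) * 0.1  # shared blob: hidden x input

def forward(x):
    h = W @ x        # encode: h = W x
    r = W.T @ h      # decode with the *transposed* shared weights
    return h, r

def backward(x, h, r):
    # Squared-error loss L = 0.5 * ||r - x||^2, linear activations for brevity.
    dr = r - x               # dL/dr
    dh = W @ dr              # backprop through the decoder (whose weights are W.T)
    dW_decoder = np.outer(h, dr)   # decoder's contribution, already in W's shape
    dW_encoder = np.outer(dh, x)   # encoder's contribution
    return dW_encoder + dW_decoder  # one gradient for the one shared blob

x = rng.standard_normal(n_in)
h, r = forward(x)
dW = backward(x, h, r)
assert dW.shape == W.shape
```

This is what the custom layer achieves implicitly: by transposing the shared weights before delegating to InnerProduct's forward() and backward(), the decoder's weight diff lands in the same shared parameter blob as the encoder's, and Caffe's solver sees their sum.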