Understanding weight sharing in a Siamese network


Ilya Zhenin

Apr 29, 2016, 6:52:34 AM
to Caffe Users
What does weight sharing mean? According to the siamese net example, we give the same name to the parameters in the convolutional and pooling layers. Does that mean the two networks will have the same set of filters (one set of filters) that is learned and used on both sides of the net? If so, what is the point of splitting the data into two nets if we can keep it in a single blob with multiple channels (since each layer has a different set of filters, or not)?



Jan

Apr 29, 2016, 7:01:46 AM
to Caffe Users
Yes, in Caffe weight sharing means using the same set of parameters: there really exists only one set of parameters, but that set is used by two (or more) layers.
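
For reference, here is a minimal sketch of how that looks in a prototxt (layer names, blob names, and shapes are illustrative, loosely following Caffe's bundled MNIST siamese example). The two convolution layers below end up using the same underlying weight and bias blobs because their param entries carry the same names:

    # Branch 1: declares the weights/biases under the shared names
    layer {
      name: "conv1"
      type: "Convolution"
      bottom: "data"
      top: "conv1"
      param { name: "conv1_w" lr_mult: 1 }  # shared weight blob
      param { name: "conv1_b" lr_mult: 2 }  # shared bias blob
      convolution_param { num_output: 20 kernel_size: 5 stride: 1 }
    }
    # Branch 2: same param names, so Caffe reuses (and co-trains)
    # the same blobs instead of allocating a second set
    layer {
      name: "conv1_p"
      type: "Convolution"
      bottom: "data_p"
      top: "conv1_p"
      param { name: "conv1_w" lr_mult: 1 }
      param { name: "conv1_b" lr_mult: 2 }
      convolution_param { num_output: 20 kernel_size: 5 stride: 1 }
    }

Gradients from both branches accumulate into that single parameter set during training.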

On the siamese network itself I cannot comment; I haven't looked at this type of network yet.

Jan

Nate Ting

May 24, 2017, 12:13:04 AM
to Caffe Users
One good thing I noticed about this practice is that siamese networks can be plotted as they conceptually are: you can draw the two branches out separately, which makes them easy to inspect and easy to toggle. And yes, the same could be done without splitting the data into two branches, but the structure would not be as explicit.