Why use scale in DropoutLayer?


Kun Wang

Oct 5, 2015, 11:40:05 PM
to Caffe Users
Hi all,

While reading the code in neuron_layers.hpp, I was confused by the class member scale_ in DropoutLayer. The comment says scale_ is "the scale for undropped inputs at train time 1/(1-p)". Why don't we just leave undropped inputs unchanged (i.e., multiply by 1 instead of 1/(1-p))? That seems to make more sense to me.

Sorry if I missed something.

Kun

Ronghang Hu

Oct 6, 2015, 12:11:29 AM
to Caffe Users
The current behavior is implemented so that the dropout layer can be used purely as a regularizer at training time and removed entirely at test time.

If you leave undropped inputs unchanged during training, then you have to scale the bottom by 1/(1-p) at test time, and the dropout layer must be kept, which is somewhat inconvenient.
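
To make the point concrete, here is a minimal sketch of the train-time forward pass with the scaling Caffe applies. This is plain C++ over std::vector, not Caffe's actual Blob/Layer API; the function name dropout_train and its parameters are made up for illustration.

#include <cstddef>
#include <random>
#include <vector>

// Inverted dropout, sketched: each input is kept with probability 1-p,
// and kept inputs are scaled by 1/(1-p) (Caffe's scale_ member). The
// expected value of each output then equals its input, so at test time
// the layer is the identity and can simply be removed.
std::vector<float> dropout_train(const std::vector<float>& bottom,
                                 float p, std::mt19937& rng) {
  std::bernoulli_distribution keep(1.0 - p);
  const float scale = 1.0f / (1.0f - p);
  std::vector<float> top(bottom.size());
  for (std::size_t i = 0; i < bottom.size(); ++i) {
    top[i] = keep(rng) ? bottom[i] * scale : 0.0f;
  }
  return top;
}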

Ronghang Hu

Oct 6, 2015, 12:13:37 AM
to Caffe Users
Correction: If you leave undropped inputs unchanged during training, then you'll have to scale the bottom by (1-p) at test time.
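
For comparison, here is a sketch of the alternative scheme the correction refers to: kept inputs are left unscaled at train time, so the test-time pass must multiply every activation by (1-p) to match the train-time expectation E[output] = (1-p) * input, and the layer cannot simply be removed. Again this is plain C++ for illustration, not Caffe's API.

#include <cstddef>
#include <vector>

// Vanilla (non-inverted) dropout at test time: no units are dropped,
// but every activation is scaled by (1-p) so its magnitude matches
// what the next layer saw on average during training.
std::vector<float> vanilla_dropout_test(const std::vector<float>& bottom,
                                        float p) {
  std::vector<float> top(bottom.size());
  for (std::size_t i = 0; i < bottom.size(); ++i) {
    top[i] = bottom[i] * (1.0f - p);
  }
  return top;
}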