How to backprop on luatorch in model:eval() mode

Naman Jain

Oct 6, 2018, 12:49:28 PM10/6/18
to torch7
Hi,
There is an assert statement that prevents us from backpropagating through a batch normalization layer when the model is in eval mode. Is there any reason for this? In PyTorch, there is no such restriction. In my opinion, keeping the batchnorm layer's statistics fixed while computing gradients should not be an issue.
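For reference, a minimal Lua Torch sketch that reproduces the restriction (this assumes a plain `nn.BatchNormalization` module; the exact assert message may vary across nn versions):

```lua
require 'nn'

local model = nn.Sequential()
model:add(nn.BatchNormalization(4))
model:evaluate()  -- eval mode: running statistics are frozen

local input = torch.randn(2, 4)
local gradOutput = torch.randn(2, 4)

local output = model:forward(input)  -- forward works fine in eval mode

-- The following line triggers the assert inside nn.BatchNormalization,
-- because its backward pass checks that self.train is true:
-- model:backward(input, gradOutput)
```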

