```lua
local fGx = function(x)
   netD:apply(function(m) if torch.type(m):find('Convolution') then m.bias:zero() end end)
   netG:apply(function(m) if torch.type(m):find('Convolution') then m.bias:zero() end end)

   gradParametersG:zero()

   --[[ the three lines below were already executed in fDx, so save computation
   noise:uniform(-1, 1) -- regenerate random noise
   local fake = netG:forward(noise)
   input:copy(fake) ]]--
   label:fill(real_label) -- fake labels are real for generator cost

   local output = netD.output -- netD:forward(input) was already executed in fDx, so save computation
   errG = criterion:forward(output, label)
   local df_do = criterion:backward(output, label)
   local df_dg = netD:updateGradInput(input, df_do)

   netG:backward(noise, df_dg)
   return errG, gradParametersG
end
```
Firstly, notice that a call to updateGradInput does not compute a full backpropagation step. To do that, you would also need to call accGradParameters, which is never called on D in fGx. You can check this by calling updateGradInput multiple times: the gradient it returns is always the same, and no parameter gradients are accumulated.
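To see why, recall how nn.Module:backward is defined in torch/nn (a simplified paraphrase of the source):

```lua
-- Sketch of nn.Module:backward from torch/nn (simplified):
function Module:backward(input, gradOutput, scale)
   scale = scale or 1
   self:updateGradInput(input, gradOutput)          -- computes and returns gradInput only
   self:accGradParameters(input, gradOutput, scale) -- accumulates into gradWeight/gradBias
   return self.gradInput
end
-- Calling only updateGradInput therefore never touches the parameter
-- gradients, which is why repeated calls keep returning the same result.
```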
Secondly, why is it necessary to call updateGradInput on D if we are not updating D at all? Because we need the gradients with respect to the fake images generated by G (input in fGx). This is exactly what updateGradInput returns: df_dg.
So, once we have df_dg, we are able to do a backward step on G with the correct gradients:

netG:backward(noise, df_dg)
PS: Notice that df_do does not contain the gradients with respect to the generated images (input) but the gradients with respect to D's output, which is a one-dimensional vector. That's the reason we can't use df_do in G's backward step.
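As a toy illustration of the shapes involved (a sketch only; the stand-in netD and the batch/image sizes below are assumptions in the spirit of the DCGAN code):

```lua
require 'nn'

-- A tiny stand-in for D: maps a batch of 3x64x64 images to one score each.
local netD = nn.Sequential()
   :add(nn.View(3 * 64 * 64))
   :add(nn.Linear(3 * 64 * 64, 1))
   :add(nn.Sigmoid())

local input  = torch.randn(16, 3, 64, 64)        -- a batch of 16 "fake" images
local output = netD:forward(input)               -- 16x1: one score per image
local df_do  = torch.ones(16, 1)                 -- gradient w.r.t. D's output (same size as output)
local df_dg  = netD:updateGradInput(input, df_do)
-- df_dg has the same size as input (16x3x64x64): exactly what
-- netG:backward(noise, df_dg) needs, while df_do is only 16x1.
```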
--
You received this message because you are subscribed to a topic in the Google Groups "torch7" group.