Hey all,
I'm training a GAN using the traditional generator and discriminator losses: -log(D(G(z))) for the generator and -log(D(x)) - log(1 - D(G(z))) for the discriminator.
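To be concrete, here's a minimal sketch of the losses I mean (assuming a PyTorch-style setup where D ends in a sigmoid and outputs a probability; this is illustrative, not my exact code):

```python
import torch

eps = 1e-8  # guard against log(0)

def generator_loss(d_fake):
    # -log(D(G(z))), averaged over the batch
    return -torch.log(d_fake + eps).mean()

def discriminator_loss(d_real, d_fake):
    # -log(D(x)) - log(1 - D(G(z))), averaged over the batch
    return -(torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps)).mean()
```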
For some reason both losses stay constant throughout training: the generator loss sits close to 0 and the discriminator loss is roughly proportional to the batch size.
If I instead switch to categorical cross-entropy over two labels (real vs. fake), the network behaves normally.
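For comparison, the cross-entropy version looks roughly like this (again a sketch, assuming D has two output logits, index 0 = fake and index 1 = real):

```python
import torch
import torch.nn.functional as F

def discriminator_loss_ce(logits_real, logits_fake):
    # reals should be classified as class 1, fakes as class 0
    real_labels = torch.ones(logits_real.size(0), dtype=torch.long)
    fake_labels = torch.zeros(logits_fake.size(0), dtype=torch.long)
    return F.cross_entropy(logits_real, real_labels) + F.cross_entropy(logits_fake, fake_labels)

def generator_loss_ce(logits_fake):
    # generator wants its fakes classified as real (class 1)
    return F.cross_entropy(logits_fake, torch.ones(logits_fake.size(0), dtype=torch.long))
```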
Does anyone have an idea how to solve this? The code is at the link below.